Philosophy professors fail on basic philosophy problems

post by Shmi (shminux) · 2015-07-15T18:41:06.473Z · LW · GW · Legacy · 107 comments

Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among a million other things) demonstrate proficiency on far harder problems than that.

Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

Abstract:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Some quotes (emphasis mine):

When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.

[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

 

107 comments

Comments sorted by top scores.

comment by Protagoras · 2015-07-15T20:23:01.459Z · LW(p) · GW(p)

I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question, generally didn't help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.

Replies from: DanArmak, shminux
comment by DanArmak · 2015-07-16T15:06:10.634Z · LW(p) · GW(p)

Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because the question "how to color the most balls" has a correct answer in the model used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
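
(A minimal sketch of this point in code; the decision rule and function names below are invented for illustration and are not taken from any actual moral theory. The idea is only that once both framings are reduced to a canonical form and a fixed rule is applied, the wording cannot change the verdict.)

    # Illustration only: a made-up, explicit decision rule applied to a
    # canonical form of the problem. Both framings of the same facts
    # necessarily yield the same answer.

    def canonical_saved(total, saved=None, lost=None):
        """Reduce either framing ("k saved" or "all but m lost") to one number."""
        if saved is not None:
            return saved
        return total - lost

    def decide(total, saved=None, lost=None, threshold=0.5):
        """Hypothetical fixed rule: accept the option iff at least half are saved."""
        return canonical_saved(total, saved, lost) >= threshold * total

    # "200 of 600 saved" and "all but 400 of 600 lost" describe the same outcome:
    assert decide(600, saved=200) == decide(600, lost=400)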

If a moral theory can't prove the correctness of an answer to a very simple problem - a choice between just two alternatives, trading off clearly morally significant stakes (lives), without any complications (e.g. the different people who may die don't have any distinguishing features) - then it probably doesn't give clear answers to most other problems either, so what use is it?

Replies from: TAG, None
comment by TAG · 2018-01-24T12:38:35.101Z · LW(p) · GW(p)

If a moral theory can't be proved correct in itself, what use is it? Given that theories are tested against intuition, and that no theory has been shown to be completely satisfactory, it makes sense to use intuition directly.

comment by [deleted] · 2015-07-16T15:11:00.661Z · LW(p) · GW(p)

Moral theories predict feelings, mathematical theories predict different things. Moral philosophy assumes you already know genocide is wrong and it tries to figure out how your subconscious generates this feeling: http://lesswrong.com/lw/m8y/dissolving_philosophy/

Replies from: DanArmak, Creutzer
comment by DanArmak · 2015-07-16T15:22:54.668Z · LW(p) · GW(p)

Moral theories predict feelings

Are you saying that because people are affected by a bias, a moral theory that correctly predicts their feelings must be affected by the bias in the same way?

This would preclude (or falsify) many actual moral theories on the grounds that most people find them un-intuitive or simply wrong. I think most moral philosophers aren't looking for this kind of theory, because if they were, they would agree much more by now: it shouldn't take thousands of years to empirically discover how average people feel about proposed moral problems!

Replies from: None, Luke_A_Somers
comment by [deleted] · 2015-07-17T07:16:06.906Z · LW(p) · GW(p)

the same way

No - the feelings are not a truth-seeking device so bias is not applicable: they are part of the terrain.

it shouldn't take thousands of years to empirically discover how average people feel about proposed moral problems!

It is not as if they were working on it every day for thousands of years. In the Christian period, for instance, what God says about morals mattered more than how people feel about it. Fairly big gaps. The classical era and the modern era together add up to a few hundred years, with all sorts of gaps.

IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest - yet in essence it is one, just a more abstract popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.

Replies from: DanArmak
comment by DanArmak · 2015-07-17T12:06:00.933Z · LW(p) · GW(p)

IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest - yet in essence it is one, just a more abstract popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.

So is philosophy trying to describe moral feelings, inconsistent and biased as they are? Or is it trying to propose explicit moral rules and convince people to follow them even when they go against their feelings? Or both?

If moral philosophers are affected by presentation bias, that means they aren't reasoning according to explicit rules. Are they trying to predict the moral feelings of others (whose? the average person's)?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T15:52:39.128Z · LW(p) · GW(p)

If their meta-level reasoning, their actual job, hasn't told them which rules to follow, or has told them not to follow rules, why should they follow rules?

Replies from: DanArmak
comment by DanArmak · 2015-07-17T16:44:45.919Z · LW(p) · GW(p)

By "rules" I meant what the parent comment referred to as trying to "algorithmize" moral feelings.

Moral philosophers are presumably trying to answer some class of questions. These may be "what is the morally right choice?" or "what moral choice do people actually make?" or some other thing. But whatever it is, they should be consistent. If a philosopher might give a different answer every time the same question is asked of them, then surely they can't accomplish anything useful. And to be consistent, they must follow rules, i.e. have a deterministic decision process.

These rules may not be explicitly known to themselves, but if they are in fact consistent, other people could study the answers they give and deduce these rules. The problem presented by the OP is that they are in fact giving inconsistent answers; either that, or they all happen to disagree with one another in just the way that the presentation bias would predict in this case.

A possible objection is that the presentation is an input which is allowed to affect the (correct) response. But every problem statement has some irrelevant context. No-one would argue that a moral problem might have different answers between 2 and 3 AM, or that the solution to a moral problem should depend on the accent of the interviewer. And to understand what the problem being posed actually is (i.e. to correctly pose the same problem to different people), we need to know what is and isn't relevant.

In this case, the philosophers act as if the choice of phrasing "200 of 600 live" vs. "400 of 600 die" is relevant to the problem. If we accepted this conclusion, we might well ask ourselves what else is relevant. Maybe one shouldn't be a consequentialist between 2 and 3 AM?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T18:54:50.796Z · LW(p) · GW(p)

You haven't shown that they are producing inconsistent theories in their published work. The result only shows that, like scientists, individual philosophers can't live up to their own cognitive standards in certain situations.

Replies from: DanArmak
comment by DanArmak · 2015-07-17T21:11:12.782Z · LW(p) · GW(p)

This is true. But it is significant evidence that they are inconsistent in their work too, absent an objective standard by which their work can be judged.

comment by Luke_A_Somers · 2015-07-16T21:43:40.436Z · LW(p) · GW(p)

It can be hard to find a formalization of the empirical systems, though. Especially since formalizing is going to be very complicated and muddy in a lot of cases. That'll cover a lot of '... and therefore, the right answer emerges'. Not all, to be sure, but a fair amount.

comment by Creutzer · 2015-07-18T07:41:17.345Z · LW(p) · GW(p)

Moral theories predict feelings

No. This is what theories of moral psychology do. Philosophical ethicists do not consider themselves to be in the same business.

comment by Shmi (shminux) · 2015-07-15T22:10:13.401Z · LW(p) · GW(p)

I would assume that detecting the danger of the framing bias, such as "200 of 600 people will be saved" vs "400 of 600 people will die", is elementary enough, and so is something an aspiring moral philosopher ought to learn to recognize and avoid before they can be allowed to practice in the field. Otherwise all their research is very much suspect.

Replies from: ChristianKl, pragmatist
comment by ChristianKl · 2015-07-15T22:58:33.558Z · LW(p) · GW(p)

Being able to detect a bias and actually being able to circumvent it are two different skills.

comment by pragmatist · 2015-07-17T09:03:19.905Z · LW(p) · GW(p)

Realize what's occurring here, though. It's not that individual philosophers are being asked the question both ways and are answering differently in each case. That would be an egregious error that one would hope philosophical training would allay. What's actually happening is that when philosophers are presented with the "save" formulation (but not the "die" formulation) they react differently than when they are presented with the "die" formulation (but not the "save" formulation). This is an error, but also an extremely insidious error, and one that is hard to correct for. I mean, I'm perfectly aware of the error, I know I wouldn't give conflicting responses if presented with both options, but I am also reasonably confident that I would in fact make the error if presented with just one option. My responses in that case would quite probably be different than in the counterfactual where I was only provided with the other option. In each case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer as I gave for the first framing, but what that answer is would, I anticipate, be impacted by the initial framing.

I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the "save" formulation, think to myself "What would I say in the 'die' formulation?" before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the "die" formulation in the first place.

Replies from: shminux
comment by Shmi (shminux) · 2015-07-17T15:37:02.416Z · LW(p) · GW(p)

Thanks, that makes sense.

I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment.

Do you think that this is what utilitarianism is, or ought to be?

I mean, I could, when presented with the "save" formulation, think to myself "What would I say in the 'die' formulation?" before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the "die" formulation in the first place.

So, do you think that, absent a formal algorithm, when presented with a "save" formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the "die" formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases? If so, has this approach been advocated by a moral philosopher you know of?

Replies from: pragmatist
comment by pragmatist · 2015-07-18T03:46:30.510Z · LW(p) · GW(p)

Do you think that this is what utilitarianism is, or ought to be?

Utilitarianism does offer the possibility of a precise, algorithmic approach to morality, but we don't have anything close to that as of now. People disagree about what "utility" is, how it should be measured, and how it should be aggregated. And of course, even if they did agree, actually performing the calculation in most realistic cases would require powers of prediction and computation well beyond our abilities.

The reason I used the phrase "artificially created", though, is that I think any attempt at systematization, utilitarianism included, will end up doing considerable violence to our moral intuitions. Our moral sensibilities are the product of a pretty hodge-podge process of evolution and cultural assimilation, so I don't think there's any reason to expect them to be neatly systematizable. One response is that the benefits of having a system (such as bias mitigation) are strong enough to justify biting the bullet, but I'm not sure that's the right way to think about morality, especially if you're a moral realist. In science, it might often be worthwhile using a simplified model even though you know there is a cost in terms of accuracy. In moral reasoning, though, it seems weird to say "I know this model doesn't always correctly distinguish right from wrong, but its simplicity and precision outweigh that cost".

So, do you think that, absent a formal algorithm, when presented with a "save" formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the "die" formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases?

Something like this might be useful, but I'm not at all confident it would work. Sounds like another research project for the Harvard Moral Psychology Research Lab. I'm not aware of any moral philosopher proposing something along these lines, but I'm not extremely familiar with that literature. I do philosophy of science, not moral philosophy.

comment by hairyfigment · 2015-07-15T19:54:19.108Z · LW(p) · GW(p)

I find this amusing and slightly disturbing - but the Trolley Problem seems like a terrible example. A rational person might answer based on political considerations, which "order effects" might change in everyday conversations.

Replies from: DanArmak
comment by DanArmak · 2015-07-16T14:47:50.139Z · LW(p) · GW(p)

Are you suggesting that moral philosophers, quizzed about their viewpoints on moral issues, answer non-truthfully in order to be politically correct or to avoid endorsing unpopular moral views?

If true, then we shouldn't listen to anything moral philosophers ever say about their subject.

Replies from: hairyfigment
comment by hairyfigment · 2015-07-16T18:26:13.899Z · LW(p) · GW(p)

Very possibly. But I'm saying this seems more likely to happen with the Trolley Problem than with most philosophical questions, and even many disputed moral questions. It's not a question of "endorsing unpopular moral views" in some abstract sense, but the social message that even a smart human being might take from the statement in an ordinary conversation.

comment by John_Maxwell (John_Maxwell_IV) · 2015-07-15T23:20:06.258Z · LW(p) · GW(p)

So I'm guessing LW would also fail the problems.

Replies from: Squark, shminux
comment by Squark · 2015-07-16T19:02:24.030Z · LW(p) · GW(p)

Anyone want to organize an experiment?

Replies from: gwern
comment by Shmi (shminux) · 2015-07-15T23:43:01.242Z · LW(p) · GW(p)

Not sure, possibly. Then again, few of the regulars are professional moral philosophers.

comment by buybuydandavis · 2015-07-16T00:18:49.075Z · LW(p) · GW(p)

which found that professional moral philosophers are no less subject to the effects of framing and order of presentation

I think some people are missing the issue. It's not that they have a problem with the Trolley Problem, but that their answers vary according to irrelevant framing effects like order of presentation.

comment by ChristianKl · 2015-07-15T19:02:20.249Z · LW(p) · GW(p)

I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily

Where did that assumption come from?

Physics professors have no such problem. Philosophy professors, however, are a different story.

If you ask physics professors questions that go counter to human intuition, I wouldn't be too sure that they get them right either.

Replies from: shminux
comment by Shmi (shminux) · 2015-07-15T19:06:53.619Z · LW(p) · GW(p)

Where did that assumption come from?

This assumption comes from expecting an expert to know the basics of their field.

If you ask physics professors questions that go counter to human intuition, I wouldn't be too sure that they get them right either.

A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.

Replies from: None, pragmatist, ChristianKl, V_V, eternal_neophyte
comment by [deleted] · 2015-07-15T23:05:16.727Z · LW(p) · GW(p)

You need to do some tweaking of your faith in experts. Experts tend to be effective in fields where they get immediate and tight feedback about whether they're right or wrong. Physics has this, philosophy does not. You should put significantly LESS faith in experts from fields where they don't have this tight feedback loop.

Replies from: shminux
comment by Shmi (shminux) · 2015-07-15T23:47:11.448Z · LW(p) · GW(p)

That's a good point. I'll continue discounting anything ancient and modern moral philosophers say, then. From Aristotle to Peter Singer, they are full of it, by your criteria.

Replies from: None, None
comment by [deleted] · 2015-07-16T02:23:39.957Z · LW(p) · GW(p)

Heh, you're right. I suppose I didn't specify sufficient criteria.

I think that philosophers who have stood the test of time have already undergone post-hoc feedback. Aristotle, Nietzsche, Hume, etc. all had areas of faulty reasoning, but for the most part this has been teased out, brought to light, and is common knowledge now. All of them were also exceptionally talented and gifted, and made better arguments than the norm. The fact that their work HAS stood the test of time is an expert vetting process in itself.

In terms of a random philosophy professor on the street, they haven't gone through this process of post-hoc feedback to nearly the same degree, and likely haven't gotten enough real time feedback to have developed these sorts of rationality processes automatically. Singer perhaps has had a bit more post-hoc feedback simply because he's popular and controversial, but not nearly as much as these other philosophers, and I suspect he still has lots of faulty reasoning to be picked up on :).

comment by [deleted] · 2015-07-16T02:01:26.468Z · LW(p) · GW(p)

Heh, you're right, I suppose I didn't correctly specify that criterion.

The point was not "every expert in these fields is untrustworthy". Singer/Aristotle/Nietzsche etc. have already been vetted by generations; their thinking has been judged to be good.

However, you should be far more skeptical of the random philosophy professor on the street: they haven't gone through that post-hoc feedback process, and they haven't gotten (as much of) the real-time feedback that would cause them to get things right merely from their training.

I think in Aristo

comment by pragmatist · 2015-07-17T04:28:36.318Z · LW(p) · GW(p)

This assumption comes from expecting an expert to know the basics of their field.

I wouldn't characterize the failure in this case as reflecting a lack of knowledge. What you have here is evidence that philosophers are just as prone to bias as non-philosophers at a similar educational level, even when the tests for bias involve examples they're familiar with. In what sense is this a failure to "know the basics of their field"?

A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.

A relevantly similar test might involve checking whether physicists are just as prone as non-physicists to, say, the anchoring effect, when asked to estimate (without explicit calculation) some physical quantity. I'm not so sure that a trained physicist would be any less susceptible to the effect, although they might be better in general at estimating the quantity.

Take, for instance, evidence showing that medical doctors are just as susceptible to framing effects in medical treatment contexts as non-specialists. Does that indicate that doctors lack knowledge about the basics of their fields?

I think what this study suggests is that philosophical training is no more effective at de-biasing humans (at least for these particular biases) than a non-philosophical education. People have made claims to the contrary, and this is a useful corrective to that. The study doesn't show that philosophers are unaware of the basics of their field, or that philosophical training has nothing to offer in terms of expertise or problem-solving.

comment by ChristianKl · 2015-07-15T22:57:31.359Z · LW(p) · GW(p)

This assumption comes from expecting an expert to know the basics of their field.

There's quite a difference between knowing the basics on a system II level and being able to apply them on system I.

comment by V_V · 2015-07-15T23:26:57.525Z · LW(p) · GW(p)

A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.

So if you were to poll physicists about, say, string theory vs. quantum loop gravity, or about the interpretations of quantum mechanics, do you think there would be no order or framing effects? That would be quite surprising to me.

Replies from: shminux
comment by Shmi (shminux) · 2015-07-15T23:41:58.839Z · LW(p) · GW(p)

I didn't realize that identifying "200 out of 600 people die" with "400 of 600 people survive" requires quantum gravity-level expertise.

Replies from: None
comment by [deleted] · 2015-07-17T05:51:32.878Z · LW(p) · GW(p)

Maybe they just thought about it in a vaguely Carrollian way, like 'if 200 of 600 people die, then we cannot say anything about the state of the other 400, because no information is given on them'?

comment by eternal_neophyte · 2015-07-15T19:25:19.326Z · LW(p) · GW(p)

Is every philosopher supposed to be a moral philosopher?

Edit: Just noticed the study contains this (which I missed in the OP):

Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation.

...which is pretty disconcerting. However, asking people to determine for themselves whether they're experts in a particular problem area doesn't strike me as particularly hygienic.

comment by TheAncientGeek · 2015-07-17T18:06:42.561Z · LW(p) · GW(p)

So here's an article linking the poor thinking of philosophers with another study showing unscientific thought by scientists....

Teleological thinking, the belief that certain phenomena are better explained by purpose than by cause, is not scientific. An example of teleological thinking in biology would be: “Plants emit oxygen so that animals can breathe.” However, a recent study (Deborah Kelemen and others, Journal of Experimental Psychology, 2013) showed that there is a strong, but suppressed, tendency towards teleological thinking among scientists, which surfaces under pressure.

Eighty scientists plus control groups were presented with 100 one-sentence statements and asked to answer true or false. Some of the statements were teleological, as in the example quoted above. Half had to answer within three seconds, while others had as long as they liked to answer.

The scientists endorsed fewer teleological statements than the controls (22 per cent versus 50 per cent). But when they were rushed, the scientists endorsed 29 per cent of the teleological statements compared with 15 per cent endorsed by unrushed scientists. This study seems to show that a teleological tendency is a resilient and enduring feature of the human mind.

The under-pressure qualification is really important. It's known that people don't fire on all cylinders under pressure... it's one of the bases of Derren Brown-style Dark Arts. Scientists and philosophers, unlike ER doctors or soldiers, don't produce their professional results as pressured individuals. The results are psychologically interesting, but have no bearing on how well anyone is doing their job.

Replies from: Vaniver
comment by Vaniver · 2015-07-17T18:17:29.710Z · LW(p) · GW(p)

So what you're saying is that 60% of the reduction in magical thinking that scientists show compared to the general population is at the 3 second level?

That... seems pretty impressive to me, but I'm not sure what I would have expected it to be.
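
(Spelling out the arithmetic behind that figure, for concreteness, using only the percentages quoted above: controls 50%, unrushed scientists 15%, rushed scientists 29%.)

    \frac{\text{reduction when rushed}}{\text{reduction when unrushed}}
      = \frac{50\% - 29\%}{50\% - 15\%} = \frac{21}{35} = 0.6

So roughly 60% of the scientists' reduction relative to the controls is still present under the 3-second deadline.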


Remember that you need to put a > in front of each paragraph to do a blockquote in comments.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-19T15:06:19.339Z · LW(p) · GW(p)

There are four elephant-in-the-room issues surrounding rationality.

1 [Rationality is more than one thing];

2 Biases are almost impossible to overcome;

3 Confirmation bias is adaptive to group discussion

4 If biases are so harmful, why don't they get selected out?

If biases are so harmful, why don't they get selected out?

We have good reason to believe that many biases are the results of cognitive shortcuts designed to speed up decision making, but not in all cases. Mercier and Sperber's Argumentative Theory of Rationality suggests that confirmation bias is an adaptation to arguing things out in groups: that's why people adopt a single point of view, and stick to it in the face of almost all opposition. You don't get good-quality discussion from a bunch of people saying There are Arguments on Both Sides.

"Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others' arguments. M&S also plead for the "rehabilitation" of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view."

Societies have systems and structures in place for ameliorating and leveraging confirmation bias. For instance, replication and cross-checking in science ameliorate the tendency of research groups to succumb to bias. Adversarial legal processes and party politics leverage the tendency, in order to get good arguments made for both sides of a question. Values such as speaking one's mind (as opposed to agreeing with leaders), and offering and accepting criticism, also support rationality.

Now, teaching rationality, in the sense of learning to personally overcome bias, has a problem in that it may not be possible to do fully, and it has a further problem in that it may not be a good idea. Teaching someone to overcome confirmation bias - to see two or more sides to the story - is, in a sense, teaching them to internalise the process of argument, to be solo rationalists. And while society perhaps needs some people like these, it perhaps also doesn't need many. Forms of solo rationality training have existed for a long time, e.g. philosophy, but they do not suit a lot of people's preferences, and not a lot of people can succeed at them, since they are cognitively difficult.

If you plug solo rationalists into systems designed for the standard human, you are likely to get an impedance mismatch, not improved rationality. If you wanted to increase overall rationality by increasing average rationality, assuming that is feasible in the first place, you would have to redesign systems. But you could probably increase overall rationality by improving systems anyway... we live in a world where medicine, of all things, isn't routinely based on good-quality evidence.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-20T11:28:51.605Z · LW(p) · GW(p)

Some expansion of point 4: If biases are so harmful, why don't they get selected out?

"During the last 25 years, researchers studying human reasoning and judgment in what has become known as the "heuristics and biases" tradition have produced an impressive body of experimental work which many have seen as having "bleak implications" for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to "inevitable illusions" (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to "systematic deviations from rationality" (Bazerman & Neale 1986) and is "not built to work by the rules of probability" (Gould 1992). It has even been suggested that human beings are "a species that is uniformly probability-blind" (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it's singularly implausible to claim that our species would have evolved with no "instinct for probability" and, hence, be "blind to chance" (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that "trick our probability calculators," they go on to claim that "when people are given information in a format that meshes with the way they naturally think about probability,"(Pinker 1997, 347, 351) the inevitable illusions turn out to be, to use Gerd Gigerenzer memorable term, "evitable" (Gigerenzer 1998). Indeed in many cases, evolutionary psychologists claim that the illusions simply "disappear" (Gigerenzer 1991)." http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/Wars/wars.html

comment by Lumifer · 2015-07-15T19:39:12.489Z · LW(p) · GW(p)

Heh

What is going on?

I think you're committing the category error of treating philosophy as science :-D

Replies from: None, shminux
comment by [deleted] · 2015-07-16T01:40:55.312Z · LW(p) · GW(p)

Heh

Nitpick: Sarunas mentioned it first

Replies from: Lumifer
comment by Lumifer · 2015-07-16T02:01:33.691Z · LW(p) · GW(p)

Yep.

So three people independently posted the same thing to LW: first as a comment in some thread, then as a top-level comment in the open thread, and finally as a post in Discussion :-)

Replies from: Pfft, None
comment by Pfft · 2015-07-17T15:07:33.288Z · LW(p) · GW(p)

Coming up: the post is promoted to Main; it is re-released as a MIRI whitepaper; Nick Bostrom publishes a book-length analysis; The New Yorker features a meandering article illustrated by a tasteful watercolor showing a trolley attacked by a Terminator.

Replies from: Lumifer
comment by Lumifer · 2015-07-17T15:21:36.298Z · LW(p) · GW(p)

Followed by a blockbuster movie where Hollywood kicks the tasteful watercolor to the curb and produces an hour-long battle around the trolley between a variety of Terminators, Transformers, and X-Men, led by Shodan on one side and GlaDOS on the other, while in a far-off Tibetan monastery the philosophers meditate on the meaning of the word mu.

comment by [deleted] · 2015-07-16T17:59:05.954Z · LW(p) · GW(p)

Yes, that is funny. I'm glad the paper is garnering attention, as I think it's a powerful reminder that we are ALL subject to simple behavioral biases.

I reject the alternative explanation that philosophy and philosophers are crackpots.

comment by Shmi (shminux) · 2015-07-15T22:03:52.712Z · LW(p) · GW(p)

Take a field that requires a PhD to work in, purports to do research, has multiple journals with peer-reviewed publications, runs multiple conferences... would you characterize a field like that as art or science?

Replies from: Randaly, Lumifer
comment by Randaly · 2015-07-17T10:37:52.379Z · LW(p) · GW(p)

All of these are plausibly true of art departments at universities as well. (The first two are a bit iffy.)

comment by Lumifer · 2015-07-15T23:19:08.204Z · LW(p) · GW(p)

Take a field that requires a PhD to work in, purports to do research, has multiple journals with peer-reviewed publications, runs multiple conferences...

Let me remind you of Feynman's description:

I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

As to

would you characterize a field like that as art or science?

Neither. I would call it mental masturbation.

comment by V_V · 2015-07-17T14:27:34.423Z · LW(p) · GW(p)

Framing effect in math:

"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.

Replies from: gjm
comment by gjm · 2015-07-17T15:32:48.552Z · LW(p) · GW(p)

It might be worth saying explicitly what these three (equivalent) axioms say.

  • Axiom of choice: if you have a set A of nonempty sets, then there's a function that maps each element a of A to an element of a. (I.e., a way of choosing one element f(a) from each set a in A.)
  • Well-ordering principle: every set can be well-ordered: that is, you can put a (total) ordering on it with the property that there are no infinite descending sequences. E.g., < is a well-ordering on the positive integers but not on all the integers, but you can replace it with an ordering where 0 < -1 < 1 < -2 < 2 < -3 < 3 < -4 < 4 < ... which is a well-ordering. The well-ordering principle implies, e.g., that there's a well-ordering on the real numbers, or the set of sets of real numbers.
  • Zorn's lemma: if you have a partially ordered set, and every subset of it on which the partial order is actually total has an upper bound, then the whole thing has a maximal element.

The best way to explain what Zorn's lemma is saying is to give an example, so let me show that Zorn's lemma implies the ("obviously false") well-ordering principle. Let A be any set. We'll try to find a well-ordering of it. Let O be the set of well-orderings of subsets of A. Given two of these -- say, o1 and o2 -- say that o1 <= o2 if o2 is an "end-extension" of o1 -- that is, o2 is a well-ordering of a superset of whatever o1 is a well-ordering of, o1 and o2 agree where both are defined, and every element new to o2 comes after everything in o1's domain. Now, this satisfies the condition in Zorn's lemma: if you have a subset of O on which <= is a total order, this means that for any two things in the subset one is an end-extension of the other, and then the union of all of them is still a well-ordering and hence an upper bound. So if Zorn's lemma is true then O has a maximal element, i.e. a well-ordering of some subset of A that cannot be extended any further. Call this W. Now W must actually be defined on the whole of A, because if some element a of A were missing from W's domain, we could extend W by putting a after everything else, which would contradict W's maximality.

(A few bits of terminology that I didn't digress to define above. A total ordering on a set is a relation < for which if a<b and b<c then a<c, and for which exactly one of a<b, b<a, a=b holds for any a,b. OR a relation <= for which if a<=b and b<=c then a<=c, and for which for any a,b either a<=b or b<=a, and for which a<=b and b<=a imply a=b. A partial ordering is similar except that you're allowed to have pairs for which a<b and b<a (OR: a<=b and b<=a) both fail. We can translate between the "<" versions and the "<=" versions: "<" means "<= but not =", or "<=" means "< or =". Given a partial ordering, an upper bound for a set A is an element b for which a<=b for every a in A. A maximal element of a partially ordered set is an element that nothing in the set is strictly greater than; unlike a greatest element, it need not be comparable to, let alone above, everything else.)

comment by pjando · 2015-07-16T23:57:09.684Z · LW(p) · GW(p)

This doesn't really bother me. Philosophers' expertise is not in making specific moral judgements, but in making arguments and counterarguments. I think that is a useful skill that collectively gets us closer to the truth.

comment by TheAncientGeek · 2015-07-16T09:24:33.273Z · LW(p) · GW(p)

Do you think there is a right answer to the Trolley problem?

Replies from: gjm, Zubon
comment by gjm · 2015-07-16T12:02:35.541Z · LW(p) · GW(p)

I'll let shminux answer that, but it's worth pointing out that the answer doesn't need to be yes for the results in this paper to indicate a problem. The point isn't that they gave bad answers, it's that their answers were strongly affected by demonstrably irrelevant things.

Unless your carefully considered preference between one death caused by you and five deaths not caused by you in the trolley scenario is that the outcome should depend on whether you were asked about some other scenario first, or on exactly how the situation was described to you, something is wrong with your thinking if you give the answers the philosophers did, even if your preferences are facts only about you and not about any sort of external objective moral reality.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-16T12:47:13.611Z · LW(p) · GW(p)

And the other issue is that overcoming those biases is regarded as all but impossible by experts in the field of cognitive bias... but I guess that "philosophers are imperfect rationalists, along with everybody else" isn't such a punchy headline.

Replies from: DanArmak, gjm
comment by DanArmak · 2015-07-16T14:55:34.880Z · LW(p) · GW(p)

Whatever the reason, if they cannot overcome it, doesn't that make all their professional output similarly useless?

However, I don't agree with what you're saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

After all, mathematicians aren't confused by being told "I colored 200 of 600 balls black" and "I colored all but 400 of 600 balls black".

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2015-07-16T20:18:16.044Z · LW(p) · GW(p)

Whatever the reason, if they cannot overcome it, doesn't that make all their professional output similarly useless?

If no one can overcome bias, does that make all their professional output useless? Do you want to buy "philosophers are crap" at the expense of "everyone is crap"?

However, I don't agree with what you're saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

That's the consistency. What about the correctness?

Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it's algorithms all the way down.

After all, mathematicians aren't confused by being told "I colored 200 of 600 balls black" and "I colored all but 400 of 600 balls black".

Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, as free of incomplete information and mess, as math problems.

Replies from: DanArmak, None, None, Luke_A_Somers
comment by DanArmak · 2015-07-17T12:17:07.557Z · LW(p) · GW(p)

If no one can overcome bias, does that make all their professional output useless? Do you want to buy "philosophers are crap" at the expense of "everyone is crap"?

No, for just the reason I pointed out. Mathematicians, "hard" scientists, engineers, etc. all have objective measures of correctness. They converge towards truth (according to their formal model). They can and do disprove wrong, biased results. And they certainly can't fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question. If such a thing happened, and if they cared about the question, they would arrive at the correct answer.

That's the consistency. What about the correctness?

Consistency is more important than correctness. If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.

Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, as free of incomplete information and mess, as math problems.

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2015-07-17T18:45:10.502Z · LW(p) · GW(p)

Consistency is more important than correctness.

I think I'm going to disagree with that.

Replies from: DanArmak
comment by DanArmak · 2015-07-17T21:09:50.560Z · LW(p) · GW(p)

Why?

Replies from: Lumifer
comment by Lumifer · 2015-07-18T03:32:47.519Z · LW(p) · GW(p)

Because correct results or forecasts are useful and incorrect are useless or worse, actively misleading.

I can use a theory which gives inconsistent but mostly correct results right now. A theory which is consistent but gives wrong results is entirely useless. And if you can fix an incorrect theory to make it right, in the same way you can fix an inconsistent theory to make it consistent.

Besides, it's trivially easy to generate false but consistent theories.

comment by TheAncientGeek · 2015-07-17T15:30:21.935Z · LW(p) · GW(p)

No, for just the reason I pointed out. Mathematicians, "hard" scientists, engineers, etc. all have objective measures of correctness.

Within their domains.

They can and do disprove wrong, biased results. And they certainly can't fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question.

So when Kahneman et al tested hard scientists for presentation bias, they found them, out of the population, to be uniquely free from it? I don't recall hearing that result.

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Even if it's too simple?

Replies from: None, TheAncientGeek, DanArmak
comment by [deleted] · 2015-07-19T03:47:48.485Z · LW(p) · GW(p)

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

Where is the evidence that philosophy, as a field, has converged towards correctness over time?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-19T08:20:03.061Z · LW(p) · GW(p)

Where is the need for it? The question is whether philosophers are doing their jobs competently. Can you fail at something you don't claim to be doing? Do philosophers claim to have The Truth?

Replies from: None
comment by [deleted] · 2015-07-20T13:21:35.270Z · LW(p) · GW(p)

Do philosophers claim to have The Truth?

That's basically what they're for, yes, and certainly they claim to have more Truth than any other field, such as "mere" sciences.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-20T21:07:04.992Z · LW(p) · GW(p)

Is that what they say?

ETA

Socrates rather famously said the opposite... he only knows that he does not know.

The claim that philosophers sometimes make is that you can't just substitute science for philosophy, because philosophy deals with a wider range of problems. But that isn't the same as claiming to have The Truth about them all.

comment by TheAncientGeek · 2015-07-17T17:28:29.387Z · LW(p) · GW(p)

Consistency is more important than correctness.

Consistency shouldn't be regarded as more important than correctness, in the sense that you check for consistency, and stop.

If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.

But the inconsistency isn't in the theory, and, in all likelihood, they are not running off an explicit theory in the first place.

comment by DanArmak · 2015-07-17T16:54:12.124Z · LW(p) · GW(p)

Within their domains.

Exactly. And if philosophers don't have such measures within their domain of philosophy, why should I pay any attention to what they say?

So when Kahneman et al tested hard scientists for presentation bias, they found them, out of the population, to be uniquely free from it? I don't recall hearing that result.

I haven't checked, but I strongly expect that hard scientists would be relatively free of presentation bias in answering well-formed questions (that have universally agreed correct answers) within their domain. Perhaps not totally free, but very little affected by it. I keep returning to the same example: you can't confuse a mathematician, or a physicist or engineer, by saying "400 out of 600 are white" instead of "200 out of 600 are black".

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Even if it's too simple?

If moral philosophers claim that, uniquely of all human fields of knowledge, theirs requires not just going beyond formal logic but being contrary to it, I'd expect to see some very extraordinary evidence. "We haven't been able to make progress otherwise" isn't quite enough; what are the results they've accomplished with whatever a-logical theories they've built?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T18:49:49.462Z · LW(p) · GW(p)

Exactly. And if philosophers don't have such measures within their domain of philosophy, why should I pay any attention to what they say?

The critical question is whether they could have such measures.

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?

That's completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.

Replies from: DanArmak
comment by DanArmak · 2015-07-17T21:13:39.493Z · LW(p) · GW(p)

The critical question is whether they could have such measures.

Do you mean they might create such measures in the future, and therefore we should keep funding them? But without such measures today, how do we know if they're moving towards that goal? And what's the basis for thinking it's achievable?

That's completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.

Is there an empirical or objective standard by which the work of moral philosophers is judged for correctness or value, something that can be formulated explicitly? And if not, how can 'the system' converge on good results?

comment by [deleted] · 2015-07-19T03:47:04.618Z · LW(p) · GW(p)

Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it's algorithms all the way down.

Of course it's algorithms all the way down! "Lens That Sees Its Flaws" and all that, remember?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-19T08:25:12.823Z · LW(p) · GW(p)

How is a process of reasoning based on an infinite stack of algorithms concluded in a finite amount of time?

Replies from: jsteinhardt
comment by jsteinhardt · 2015-07-19T19:23:03.210Z · LW(p) · GW(p)

You can stop recursing whenever you have sufficiently high confidence, which means that your algorithm terminates in finite time with probability 1, while also querying each algorithm in the infinite stack with non-zero probability.
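
(A minimal sketch of that idea in code; the function name and the particular stopping probability are illustrative, not anything from the comment. Each meta-level of the stack is queried with nonzero probability, yet the recursion halts with probability 1 and has finite expected depth.)

    # Illustration only: recurse one more meta-level with probability p < 1.
    # Every depth is reachable (probability p**k > 0), but the process stops
    # with probability 1; the depth is geometric with mean p / (1 - p).

    import random

    def query_stack(depth=0, p_continue=0.5):
        """Hypothetical check of the algorithm one meta-level up, followed by
        a coin flip deciding whether to keep recursing."""
        # ... whatever checking belongs at this level would go here ...
        if random.random() < p_continue:
            return query_stack(depth + 1, p_continue)
        return depth  # stopped here, after finitely many levels

    print(query_stack())  # terminates every run, with no fixed depth bound imposed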

Replies from: None
comment by [deleted] · 2015-07-20T13:21:05.039Z · LW(p) · GW(p)

Bingo. And combining that with a good formalization of bounded rationality tells you how deep you can afford to go.

But of course, you're the expert, so you know that ^_^.

comment by [deleted] · 2015-07-17T05:40:53.347Z · LW(p) · GW(p)

Re: everyone is crap

But that is not a problem. Iff everyone is crap, I want to believe that everyone is crap.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T16:02:48.265Z · LW(p) · GW(p)

It's a problem if you want to bash one particular group.

comment by Luke_A_Somers · 2015-07-16T21:51:50.872Z · LW(p) · GW(p)

If no one can overcome bias, does that make all their professional output useless?

My professional input does not depend on bias in moral (or similarly fuzzy) questions. As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks.

These are rather different from how a philosopher can operate.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T08:36:15.889Z · LW(p) · GW(p)

My professional input does not depend on bias in moral (or similarly fuzzy) questions.

But that doesn't make philosophy uniquely broken. If anything it is the other way around: disciplines that deal with the kind of well-defined abstract problems where biases can't get a grip are exceptional.

As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks. These are rather different from how a philosopher can operate.

"Can operate" was carefully phrased. If the main role of philosophers is to answer urgent object level moral quandaries, then the OP would have pointed out a serious real world problem....but philosophers typically don't do that, they typically engage in long term meta level thought on a variety of topics,

Philosophers can operate in a way that approximates the OP scenario, for instance when they sit on ethics committees. Of course, they sit alongside society's actual go-to experts on object-level ethics, religious professionals, who are unlikely to be less biased.

Philosophers aren't the most biased or most impactful people in society... worry about the biases of politicians, doctors, and financiers.

Replies from: DanArmak, Luke_A_Somers
comment by DanArmak · 2015-07-17T12:19:50.731Z · LW(p) · GW(p)

Philosophers aren't the most biased or most impactful people in society... worry about the biases of politicians, doctors, and financiers.

I can't dismiss politicians, doctors and financiers. I can dismiss philosophers, so I'm asking why should I listen to them.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T15:03:24.561Z · LW(p) · GW(p)

You can dismiss philosophy, if it doesn't suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly. Dismissing philosophers without dismissing philosophy is dangerous, as it means you are doing philosophy without knowing how. You are unlikely to be less biased, whilst being likely to misunderstand questions, reinvent broken solutions, and so on. Consistently avoiding philosophy is harder than it seems. You are likely to be making a philosophical claim when you say scientists and mathematicians converge on truth.

Replies from: DanArmak
comment by DanArmak · 2015-07-17T16:58:59.465Z · LW(p) · GW(p)

You can dismiss philosophy, if it doesn't suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly

I didn't mean to dismiss moral philosophy; I agree that it asks important questions, including "should we apply a treatment where 400 of 600 survive?" and "do such-and-such people actually choose to apply this treatment?" But I do dismiss philosophers who can't answer these questions free of presentation bias, because even I myself can do better. Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP's suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don't believe that it's not representative merely because a PhD in moral philosophy sounds very wise.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T18:39:41.114Z · LW(p) · GW(p)

I didn't mean to dismiss moral philosophy; I agree that it asks important questions, including "should we apply a treatment where 400 of 600 survive?" and "do such-and-such people actually choose to apply this treatment?" But I do dismiss philosophers who can't answer these questions free of presentation bias,

Meaning you dismiss their output, even though it isn't prepared under those conditions and is prepared under conditions allowing bias reduction, e.g. by cross-checking.

because even I myself can do better.

Under the same conditions? Has that been tested?

Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP's suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don't believe that it's not representative merely because a PhD in moral philosophy sounds very wise.

Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness...or trust the system?

Replies from: DanArmak
comment by DanArmak · 2015-07-17T21:09:37.171Z · LW(p) · GW(p)

because even I myself can do better.

Under the same conditions? Has that been tested?

It hasn't been tested, but I'm reasonably confident in my prediction, because if I were answering moral dilemmas, explicitly reasoning in far mode, I would try to follow some kind of formal system where presentation doesn't matter and where answers can be checked for correctness.

Granted, I would need some time to prepare such a system and to practice with it. And I'm well aware that all actually proposed formal moral systems go against moral intuitions in some cases. So my claim to be, counterfactually, a better moral philosopher is really quite contingent.
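To make concrete what "a formal system where presentation doesn't matter" could look like, here is a minimal sketch (my own toy construction, not anything DanArmak or the paper proposes): every framing of the classic "Asian disease" style option is first reduced to a canonical count of survivors, and a fixed rule then operates only on that canonical form, so it cannot respond to framing.

```python
def expected_survivors(total, saved=None, die=None):
    """Canonical value of an option, independent of how it was framed.

    Both framings, "X of `total` are saved" and "Y of `total` die", map to the
    same number of survivors, so a rule that looks only at this number cannot
    exhibit a framing effect.
    """
    if saved is None:
        saved = total - die
    return saved

def choose(options):
    """Pick whichever option maximizes survivors, however it was described."""
    return max(options, key=lambda opt: expected_survivors(**opt))

# The two classic framings of the same treatment evaluate identically:
positive = {"total": 600, "saved": 400}   # "400 of 600 will be saved"
negative = {"total": 600, "die": 200}     # "200 of 600 will die"
assert expected_survivors(**positive) == expected_survivors(**negative)
assert choose([positive, {"total": 600, "die": 300}]) == positive
```

The design choice doing the work is the canonicalization step: once "400 of 600 saved" and "200 of 600 die" map to the same representation, no downstream rule can distinguish them.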

Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness...or trust the system?

Other sciences deal with human fallibility by having an objective standard of truth against which individual beliefs can be measured. Mathematical theories have formal proofs, and with enough effort the proofs can even be machine-checked. Physical, etc. theories produce empirical predictions that can be independently verified. What is the equivalent in moral philosophy?
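As a concrete illustration of the "machine-checked" half of that standard (my own example, not something from the thread), here is a short Lean proof that the proof checker, rather than any individual's judgment, certifies:

```lean
-- A trivial machine-checked fact: the Lean kernel verifies the proof term,
-- so accepting it involves no one's intuition or bias.
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- The checker rejects anything that does not actually follow; for example,
-- `theorem bogus (n : Nat) : n = n + 1 := rfl` would fail to compile.
```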

comment by Luke_A_Somers · 2015-07-17T15:08:45.879Z · LW(p) · GW(p)

So, in short, you are answering your rhetorical question with 'no', which rather undermines your earlier point: no, DanArmak did not 'prove too much'.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-07-17T15:46:02.248Z · LW(p) · GW(p)

DanArmak did not 'prove too much'.

Shminux did.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-07-17T19:50:49.123Z · LW(p) · GW(p)

If you answer the rhetorical question with 'no', then no, Shminux didn't prove too much either.

comment by [deleted] · 2015-07-19T03:45:55.079Z · LW(p) · GW(p)

Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

This is roughly the point where some bloody philosopher invokes Hume's Fork, mutters something about meta-ethics, and tells you to fuck off back to the science departments where you came from.

comment by gjm · 2015-07-16T14:04:56.490Z · LW(p) · GW(p)

One might reasonably hope that professional philosophers would be better reasoners than the population at large. That is, after all, a large fraction of their job.

Overcoming these biases completely may well be impossible, but should we really expect that years of training in careful thinking, plus further years of practice, on a population that's supposedly selected for aptitude in thinking, would fail to produce any improvement?

(Maybe we should, either on the grounds that these biases really are completely unfixable or on the grounds that everyone knows academic philosophy is totally broken and isn't either selecting or training for clearer more careful thinking. I think either would be disappointing.)

Replies from: None
comment by [deleted] · 2015-07-19T03:50:39.833Z · LW(p) · GW(p)

Well, if they weren't explicitly trained to deal with cognitive biases, we shouldn't expect that they've magically acquired such a skill from thin air.

comment by Zubon · 2015-07-16T15:36:53.376Z · LW(p) · GW(p)

Yes: what we learn from trolley problems is that human moral intuitions are absolute crap (technical term). Starting with even the simplest trolley problems, you find that many people have very strong but inconsistent moral intuitions. Others immediately go to a blue screen when presented with a moral problem with any causal complexity. The answer is that trolley problems are primarily system diagnostic tools that identify corrupt software behaving inconsistently.

Back to the object level: the right answer depends on other assumptions. Unless someone wants to claim to have solved all meta-ethical problems and to have the right ethical system, "a right answer" is the correct framing rather than "the right answer," because the answer is only right within a given ethical framework. Almost any consequentialist system will output "save the most lives/QALYs."

comment by Thomas · 2015-07-16T14:23:05.936Z · LW(p) · GW(p)

I remember, long ago, somebody wanting to emulate a small routine for adding big numbers, using a crowd of people as an arithmetic unit. Each person's task was simple: add two given digits from 0 to 9, report the integer part of the result divided by 10 (the carry) to the next person in the crowd, and remember the result modulo 10.

The crowd was made up of mathematicians. Still, on every attempt someone made an error while adding 5 and 7 or some such.
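For concreteness, the per-person rule the crowd was given amounts to ripple-carry addition in base 10; here is a sketch (assuming, as the scheme requires, that each person also adds the carry handed to them, which the description above leaves implicit):

```python
def crowd_add(a_digits, b_digits):
    """Add two big numbers digit by digit, the way the crowd was asked to.

    Each "person" receives one digit from each number plus the carry from the
    previous person, remembers the sum modulo 10, and passes on the integer
    part of the sum divided by 10 as the new carry.
    Digits are given least-significant first.
    """
    carry, result = 0, []
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        result.append(s % 10)   # what each person remembers
        carry = s // 10         # what each person reports to the next
    if carry:
        result.append(carry)
    return result

# 957 + 168 = 1125, with digits listed least-significant first:
assert crowd_add([7, 5, 9], [8, 6, 1]) == [5, 2, 1, 1]
```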

comment by TheAncientGeek · 2015-07-19T14:42:47.365Z · LW(p) · GW(p)

Oh, and “94% of professors report that they are above average teachers, ..."

Replies from: shminux
comment by Shmi (shminux) · 2015-07-19T17:34:30.554Z · LW(p) · GW(p)

Sure, but this is a different issue: experts being untrustworthy in a different field, in this case evaluating teaching skill.

comment by [deleted] · 2015-07-19T03:43:32.550Z · LW(p) · GW(p)

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

Why do you think it's possible to be an expert at a barely-coherent subject?

comment by TheAncientGeek · 2015-07-17T18:15:43.693Z · LW(p) · GW(p)

So here's an article linking the poor thinking of philosophers with another study, this one showing unscientific thinking by scientists....

Teleological thinking, the belief that certain phenomena are better explained by purpose than by cause, is not scientific. An example of teleological thinking in biology would be: “Plants emit oxygen so that animals can breathe.” However, a recent study (Deborah Kelemen and others, Journal Experimental Psychology, 2013) showed that there is a strong, but suppressed, tendency towards teleological thinking among scientists, which surfaces under pressure.

Eighty scientists plus control groups were presented with 100 one-sentence statements and asked to answer true or false. Some of the statements were teleological, as in the example quoted above. Half had to answer within three seconds, while others had as long as they liked to answer.

The scientists endorsed fewer teleological statements than the controls (22 per cent versus 50 per cent). But when they were rushed, the scientists endorsed 29 per cent of the teleological statements compared with 15 per cent endorsed by unrushed scientists. This study seems to show that a teleological tendency is a resilient and enduring feature of the human mind.

The "under pressure" qualification is really important. It's known that people don't fire on all cylinders under pressure ... it's one of the bases of Derren Brown-style dark arts. Scientists and philosophers, unlike ER doctors or soldiers, don't produce their professional results as pressured individuals. The results are psychologically interesting, but have no bearing on how well anyone is doing their job.

comment by [deleted] · 2015-07-16T08:21:09.667Z · LW(p) · GW(p)

On the other hand, in the last 100-120 years very little interesting philosophy has been produced by non-professors. My favorites (Thomas Nagel, Philippa Foot, etc.) are or were all professors. It seems to be a necessary but not sufficient condition. Or maybe it is not so much a condition as universities being good at recognizing the good ones and throwing jobs at them, though they seem to have too many jobs and not enough good candidates.

Replies from: gjm
comment by gjm · 2015-07-16T12:10:41.204Z · LW(p) · GW(p)

It might be necessary for making your philosophical thoughts visible. I dare say Bill Gates has given some thought to philosophical questions. For all I know, he may have had exceptionally clear and original thoughts about them. But I've read books about philosophy by Nagel and Foot and not by Gates because Nagel and Foot have had philosophy books published. Bill Gates probably hasn't had time to write a philosophy book, and would have more difficulty than Nagel and Foot in getting one published by the sort of publisher readers take seriously.

... Actually, maybe Gates is famous enough that he could find a good publisher for anything he wants; I don't know. So maybe choose someone a few notches down in fame and influence, but still exceptionally smart. Random examples: Bill Atkinson (software guy; wrote a lot of the graphics code in the original Apple Macintosh), Thomas Ades (composer; any serious classical music aficionado will know who he is, but scarcely anyone else), Vaughan Jones (Fields-medal-winning mathematician). If any of those had done first-rate philosophical thinking, I bet no one would know.

comment by Gunnar_Zarncke · 2015-07-15T18:55:25.288Z · LW(p) · GW(p)

Why not give a precise title??

Replies from: shminux
comment by Shmi (shminux) · 2015-07-15T19:00:23.001Z · LW(p) · GW(p)

OK, I'll fix that. Just wanted to show the contrast.

comment by V_V · 2015-07-15T23:13:25.402Z · LW(p) · GW(p)

Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers.

Professional physicists are empirically no less likely to fail to solve quantum gravity than non-physicists.

This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

No, it does not. The trolley problem is a genuinely hard problem with no generally accepted satisfactory solution.

Replies from: None
comment by [deleted] · 2015-07-16T00:53:52.776Z · LW(p) · GW(p)

They weren't testing for the ability to solve the trolley problem; they were testing for framing effects. You can't test for framing effects if everybody gives the same answer, so they had to use an unsolved problem to test for them.

Replies from: V_V
comment by V_V · 2015-07-17T14:20:32.344Z · LW(p) · GW(p)

But if you were to test physicists on an unsolved physics problem, would you detect no framing effects? That doesn't seem obvious to me.

Replies from: gjm, None
comment by gjm · 2015-07-17T15:46:51.684Z · LW(p) · GW(p)

I bet you would. It wouldn't have to be an unsolved problem; one whose answer they couldn't work out too quickly would suffice. The sort of problem you'd need is one for which there's a plausible-seeming argument for each of two conclusions (e.g., the "Feynman sprinkler" problem), and then you'd frame the question so as to suggest one or the other of the arguments.

But it would be disappointing and surprising if physics professors turned out to do no better at such questions than people with no training in physics.

(If you make the question difficult enough and give them little enough time, that might happen. Maybe the Feynman sprinkler problem with 30 seconds' thinking time would do. Question: How closely analogous is this to the trolley problem for philosophers? Question: If you repeat the study we're describing here but encourage the philosophers to spend several minutes thinking about each question, do the framing effects decrease a lot? More or less than for people who aren't professional philosophers?)