A reply to Mark Linsenmayer about philosophy

post by lukeprog · 2013-01-05T11:25:25.242Z · LW · GW · Legacy · 68 comments

Mark Linsenmayer, one of the hosts of a top philosophy podcast called The Partially Examined Life, has written a critique of the view that Eliezer and I seem to take of philosophy. Below, I respond to a few of Mark's comments. Naturally, I speak only for myself, not for Eliezer.


I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy...

Sturgeon's Law declares that "90% of everything is crap." I think something like that is true, though perhaps it's 88% crap in physics, 99% crap in philosophy, and 99.99% crap on 4chan.

But let me be more precise. I do claim that almost all philosophy is useless for figuring out what is true, for reasons explained in several of my posts.

Mark replies that the kinds of unscientific philosophy I dismiss can be "useful at least in the sense of entertaining," which of course isn't something I'd deny. I'm just trying to say that Heidegger is pretty darn useless for figuring out what's true. There are thousands of readings that will more efficiently make your model of the world more accurate.

If you want to read Heidegger as poetry or entertainment, that's fine. I watch Game of Thrones, but not because it's a useful inquiry into truth.

Also, I'm not sure what it would mean to say we should throw out 90% of philosophy because of rationality, but I probably don't agree with the "because" clause there.


[Luke's] accusation is that most philosophizing is useless unless explicitly based on scientific knowledge on how the brain works, and in particular where intuitions come from... [But] to then throw out the mass of the philosophical tradition because it has been ignorant of [cognitive biases] is [a mistake].

I don't, in fact, think that "most philosophizing is useless unless explicitly based on scientific knowledge [about] how the brain works," nor do I "throw out the mass of the philosophical tradition because it has been ignorant of [cognitive biases]." Sometimes, people do pretty good philosophy without knowing much of modern psychology. Look at all the progress Hume and Frege made.

What I do claim is that many specific philosophical positions and methods are undermined by scientific knowledge about how brains and other systems work. For example, I've argued that a particular kind of philosophical analysis, which assumes concepts are defined by necessary and sufficient conditions, is undermined by psychological results showing that brains don't store concepts that way.

If some poor philosopher doesn't know this, because she thinks it's okay to spend all day using her brain to philosophize without knowing much about how brains work, she might spend several years of her career pointlessly trying to find a necessary-and-sufficient-conditions analysis of knowledge that is immune to Gettier-style counterexamples.

That's one reason to study psychology before doing much philosophy. Doing so can save you lots of time.

Another reason to study psychology is that psychology is a significant component of rationality training (yes, with daily study and exercise, like piano training). Rationality training is important for doing philosophy because philosophy needs to trust your rationality even though it shouldn't.


...Looking over Eliezer's site and Less Wrong... my overall impression is again that... none of this adds up to the blanket critique/world-view that comes through very clearly...

Less Wrong is a group blog, so it doesn't quite have its own philosophy or worldview.

Eliezer, however, most certainly does. His approach to epistemology is pretty thoroughly documented in the ongoing, book-length sequence Highly Advanced Epistemology 101 for Beginners. Additional parts of his "worldview" come to light in his many posts on philosophy of language, free will, metaphysics, metaethics, normative ethics, axiology, and philosophy of mind.

I've written less about my own philosophical views, but you can get some of them in two (ongoing) sequences: Rationality and Philosophy and No-Nonsense Metaethics.


I think it's instructive to contrast Eliezer with David Chalmers... who is very much on top of the science in his field... and yet he is not on board with any of this "commit X% of past philosophy to the flames" nonsense, doesn't think metaphysical arguments are meaningless or that difficult philosophical problems need to be defined away in some way, and, most provocatively, sees in consciousness a challenge to a physicalist world-view... I respectfully suggest that while reading more in contemporary science is surely a good idea... the approach to philosophy that is actually schooled in philosophy a la Chalmers is more worthy of emulation than Eliezer's dismissive anti-philosophy take.

Chalmers is a smart dude, a good writer, and fun to hang with. But Mark doesn't explain here why it's "nonsense" to propose that truth-seekers (qua truth-seekers) should ignore 99% of all philosophy, why many metaphysical arguments aren't meaningless, why some philosophical problems can't simply be dissolved, nor why Chalmers' approach to philosophy is superior to Eliezer's.

And that's fine. As Mark wrote, "I intended this post to be a high-level overview of positions." I'd just like to flag that arguments weren't provided in Mark's post.

Meanwhile, I've linked above to many posts Eliezer and I have written about why most philosophy is useless for truth-seeking, why some metaphysical arguments are meaningless, and why some philosophical problems can be dissolved. (We'd have to be more specific about the Chalmers vs. Eliezer question before I could weigh in. For example, I find Chalmers' writing to be clearer, but Eliezer's choice of topics for investigation more important for the human species.)

Finally, I'll note that Nick Bostrom takes roughly the same approach to philosophy as Eliezer and I do, but Nick has a position at Oxford University, publishes in leading philosophy journals, and so on. On philosophical method, I recommend Nick's first professional paper, Predictions from Philosophy (1997). It sums up the motivation behind much of what Nick and Eliezer have done since then.

68 comments


comment by NancyLebovitz · 2013-01-05T14:49:58.271Z · LW(p) · GW(p)

If Sturgeon's Law is true, then 90% of psychology is crap -- possibly higher, because it's very tempting to publish fascinating generalizations -- so what implications does this have for philosophy?

Replies from: khafra
comment by khafra · 2013-01-07T15:29:44.996Z · LW(p) · GW(p)

Good question. If knowing psychology only improves philosophy by 10%, going from .01% useful to .011% useful, that's big but not revolutionary. If, on the other hand, it eliminates 10% of philosophical abject failures, going from .01% useful to 89.99% useful, that's damned impressive.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-07T15:55:27.398Z · LW(p) · GW(p)

I was looking at the question from a much less optimistic angle. Sturgeon's Law was originally a defense of science fiction against people who'd mock the worst of the field as though it was the whole story. (Science fiction used to be much less respectable than it is now.)

However, works of art are separable from each other. Intellectual disciplines are much more entangled.

If psychology is proposed as a tool for improving philosophy but a great deal of psychology is fascinating nonsense, your percentages look worse. There's a risk that psychology will damage philosophy rather than improving it.

Also, eliminating abject failure isn't the same thing as making the remainder useful.

comment by Kawoomba · 2013-01-05T17:46:25.870Z · LW(p) · GW(p)

Citing Sturgeon's Law - while it is the amiable thing to do, charitably offering to resolve the dispute in a general cloud of "we should all improve" - seems a bit like a cop-out.

Making a claim such as "90% of philosophy should be thrown out for rationality reasons" (assuming that quote isn't a strawman) singles out philosophy. Defending it using Sturgeon's Law - which in your estimation applies to various fields, if to differing extents - retrospectively recasts your posts as a much more generic, and thus less hard-hitting and less targeted, critique of the current modus operandi of science. Did you really mean to quiet your thunder thus, when you started like Prometheus?

So while you technically can cite Sturgeon's Law as predicting/explaining your stance, it goes against the spirit of dealing with philosophy specifically.

Replies from: lukeprog
comment by lukeprog · 2013-01-05T20:43:39.470Z · LW(p) · GW(p)

I'm not quite sure what it would mean to say that "90% of philosophy should be thrown out for rationality reasons," but I probably don't endorse that statement.

Anyway, as I said in my post, philosophy isn't alone in being mostly crap, and I wouldn't want people to think I believed otherwise.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-05T21:38:21.903Z · LW(p) · GW(p)

(Was just meaning to paraphrase "I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy...")

comment by pragmatist · 2013-01-05T13:19:57.598Z · LW(p) · GW(p)

Philosophy is a pretty varied discipline. It's worth distinguishing philosophy of religion and ethical theory (which, if I'm not mistaken, are the main fields you have researched) from, say, philosophy of science, to which your criticisms don't really apply, and which, I would argue, has been responsible for a number of genuinely valuable advances in recent years.

Look, for instance, at the abstracts from a random issue of The British Journal for the Philosophy of Science (the premier journal in the field), and tell me if the majority seem to be of the kind that are susceptible to your worries. As a suggestive illustration, here are the titles of all the articles in the latest issue:

  • Quantum Theory: A Pragmatist Approach
  • Relational Holism and Humean Supervenience
  • Symplectic Reduction and the Problem of Time in Nonrelativistic Mechanics
  • Selection Biases in Likelihood Arguments
  • Objective Bayesian Calibration and the Problem of Non-convex Evidence
  • Calibration and Convexity

Incidentally, the BJPS has an Editor's Choice section, where certain articles are made freely available, and which I highly recommend.

Replies from: lukeprog
comment by lukeprog · 2013-01-05T13:39:45.194Z · LW(p) · GW(p)

Yup. The most frequently useful parts of philosophy of science are sometimes called "formal epistemology," which I picked out as a particularly useful corner of philosophy in my very first post on Less Wrong.

Another example of useful philosophy is a tiny corner of ethics which works on the problem of "moral uncertainty." Proposed solutions in that domain generally aren't developed with AI in mind, but we (at the Singularity Institute) are going to steal them anyway to see whether they help with Friendly AI theory.

comment by Peterdjones · 2013-01-05T18:00:23.327Z · LW(p) · GW(p)

What I do claim is that many specific philosophical positions and methods are undermined by scientific knowledge about how brains and other systems work.

It would be useful to have a list of such positions which are still taken seriously by Anglophone philosophy. FYI, Hegel, Heidegger, Berkeley, and other easy targets generally aren't.

For example, I've argued that a particular kind of philosophical analysis, which assumes concepts are defined by necessary and sufficient conditions, is undermined by psychological results showing that brains don't store concepts that way.

But science gains benefits from using categories that are artificially tidy compared to folk concepts. Thus a tomato is not a scientific vegetable, nor a whale a scientific fish. Why shouldn't philosophy do the same?

Replies from: amcknight
comment by amcknight · 2013-01-05T20:13:41.499Z · LW(p) · GW(p)

According to the PhilPapers survey results, 4.3% believe in idealism (i.e. Berkeley-style reality).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-04-22T08:54:06.614Z · LW(p) · GW(p)

A lot of those probably aren't advocating subjective idealism. Kantian views ('transcendental idealism'), Platonistic views ('platonic idealism'), Russellian views ('phenomenalism'), and Hegelian views ('objective idealism') are frequently called 'idealism' too. In the early 20th century, 'idealism' sometimes degraded to such an extent that it seemed to mean little more than the declaration 'I hate scientism and I think minds are interesting and weird'.

It's also worth noting that Peter probably had analytic philosophy in mind when he said 'Anglophone'. Most of the idealists in the survey are probably in the continental or historically Kantian tradition.

comment by ChrisHallquist · 2013-01-06T07:49:47.225Z · LW(p) · GW(p)

I'm curious to know what progress you think Hume made. I'm a big fan of Hume the observer of human behavior, Hume the proto-economist, and Hume the critic of religion; I'm less enthusiastic about what you might call Hume's purely philosophical work (problem of induction and so on).

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-06T08:58:30.758Z · LW(p) · GW(p)

Hume was the first to come up with a counterfactual definition of causality, I think.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2013-01-06T10:18:20.587Z · LW(p) · GW(p)

That is being generous to Hume, I think. The counterfactual account in Hume is an afterthought to the first of his two (incompatible) definitions of causation in the Enquiry:

Similar objects are always conjoined with similar. Of this we have experience. Suitably to this experience, therefore, we may define a cause to be an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.

As far as I know, this is the only place where Hume offers a counterfactual account of causation, and in doing so, he confuses a counterfactual account with a regularity account. Not promising. Many, many people have tried to find a coherent theory of causation in Hume's writings: he's a regularity theorist, he's a projectivist, he's a skeptical realist, he's a counterfactual theorist, he's an interventionist, he's an inferentialist ... or so various interpreters say. On and on. I think all these attempts at interpreting Hume have been failures. There is no Humean theory to find because Hume didn't offer a coherent account of causation.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-06T12:52:56.693Z · LW(p) · GW(p)

I agree that Hume was not thinking coherently about causality, but the credit for the counterfactual definition still ought to go to him, imo. Are you aware of an earlier attempt along these lines?

Replies from: JonathanLivengood
comment by JonathanLivengood · 2013-01-06T13:56:42.730Z · LW(p) · GW(p)

That question raises a bunch of interpretive difficulties. You will find the expression sine qua non, which literally means "without which not," in some medieval writings about causation. For example, Aquinas rejects mere sine qua non causality as an adequate account of how the sacraments effect grace. In legal contexts today, that same expression denotes a simple counterfactual test for causation -- the "but for" test. One might try to interpret the phrase as meaning "indispensable" when Aquinas and other medievals use it and then deflate "indispensable" of its counterfactual content. However, if "indispensable" is supposed to lack counterfactual significance, then the same non-counterfactual reading could, I think, be taken with respect to that passage in Hume. I don't know if the idea shows up earlier. I wouldn't be surprised to find that it does.

comment by beoShaffer · 2013-01-08T22:44:10.317Z · LW(p) · GW(p)

Another host of The Partially Examined Life weighs in with his own post.

comment by wedrifid · 2013-01-06T05:15:42.409Z · LW(p) · GW(p)

Mark Linsenmayer, one of the hosts of a top philosophy podcast called The Partially Examined Life

Luke, is this a podcast you recommend for those who are interested in philosophy? Is it 99% crap like philosophy at large or is it closer to the 90% crap?

I ask because I wouldn't mind picking up some philosopher jargon. I find it hard to remember words that don't seem to carve reality at its joints---the words just don't feel salient enough. But they can nevertheless be useful when trying to communicate with people that speak only that language.

Replies from: lukeprog
comment by lukeprog · 2013-01-06T08:54:04.164Z · LW(p) · GW(p)

If you want shorter episodes about the works of living philosophers, you may prefer Philosophy Bites.

comment by Peterdjones · 2013-01-05T18:38:55.211Z · LW(p) · GW(p)

Mark makes a good point here:

Much less does reason itself necessitate a certain whole world view, a way of determining what’s truly important and worth spending time on and what isn’t. Many Chinese folks who mix Confucianism, Taoism, and Buddhism in their personal philosophies understand this. Though these views all have different practical upshots (e.g. re. do you respect state authority and tradition or not?), they all contain insights that have worked well in various contexts, and they are (in some portion of the cultural traditions relevant to this example, anyway) based more on different, non-empirically-verifiable attitudes towards life, rather than (as in some Western creeds) on some alleged matters of fact (which makes it much harder for someone to be a Christian-Jewish-Muslim hybrid, though it can be done).

So being a scientist, even one highly tuned into the latest development in cognitive science, statistics, and the like, does not actually dictate a single overall attitude toward life, a mission, a set of core beliefs. And yet, this is what we see in Eliezer’s attitude as exemplified in this podcast and on Less Wrong, which contains numerous articles on mistakes in reasoning that come from an ignorance of such advances as Bayes’s theorem

TLDR: Science deals with "is". Since there is an is-ought gap, there is still plenty for philosophy to say about "ought".

Replies from: BerryPick6, syllogism
comment by BerryPick6 · 2013-01-05T20:19:37.368Z · LW(p) · GW(p)

TLDR: Science deals with "is". Since there is an is-ought gap, there is still plenty for philosophy to say about "ought".

Except that, as I'm positive you already know, to get out of the is-ought bind all you have to do is specify a goal or desire you have. The ought-statement flows logically from the is-statement (which science tells us about) and the goal/desire statement (which science is getting increasingly good at telling us about).

Replies from: Peterdjones
comment by Peterdjones · 2013-01-05T23:23:12.228Z · LW(p) · GW(p)

No, that's would, not ought. I oughtn't act on all my desires.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-06T11:31:37.240Z · LW(p) · GW(p)

Could you humor me and explain the difference?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T12:59:23.622Z · LW(p) · GW(p)

I am struggling with that, because it is so obvious. We sometimes morally condemn people for acting on their desires. The rapist acts on a desire to rape. We condemn them for it, which is to say we think they ought not to have acted on their desires. So what they ought to do is not synonymous with what they desire to do.

AFAICS, the only way to get confused about this is to take "ought" to mean "instrumentally ought". But no one who believes in the is-ought gap thinks it's about instrumental-ought.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-06T13:22:10.935Z · LW(p) · GW(p)

I am struggling with that, because it is so obvious.

I thank you for taking the time and replying anyway. :)

We sometimes morally condemn people for acting on their desires. The rapist acts on a desire to rape. We condemn them for it, which is to say we think they ought not to have acted on their desires. So what they ought to do is not synonymous with what they desire to do.

What we are condemning isn't their resulting 'ought' statement per se, but rather that their reasoning went awry somewhere along the way.

The reason you shouldn't rape someone is a result of some sort of computation (maybe not the best word for it, but whatever):

Is: What the rapist believes about the world.
+ Desire/Goal/Value: What a perfectly informed and perfectly rational version of said rapist would value or desire.
= Ought: The rapist should not have raped.

When we condemn the rapist, we are trying to show that he has some mistake in his first or second step, but not in the 'ought' statement, which is just the product of the first two statements.
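BerryPick6's schema can be sketched as a toy function, on the assumption that the 'ought' carries no content beyond its two inputs. Every name below is hypothetical, introduced only to illustrate the structure of the argument, and the "computation" is the comment's metaphor, not a real algorithm:

```python
# A toy rendering of the is + desire = ought schema (all names are
# hypothetical and purely illustrative).
def derive_ought(is_beliefs, idealized_desires):
    """Derive ought-statements from factual beliefs plus idealized desires.

    is_beliefs: maps an action to the outcome the agent believes it produces.
    idealized_desires: outcomes a perfectly informed, reflective version of
    the agent would seek or avoid.
    """
    oughts = []
    for action, outcome in is_beliefs.items():
        if outcome in idealized_desires["avoid"]:
            oughts.append(f"ought not {action}")
        elif outcome in idealized_desires["seek"]:
            oughts.append(f"ought {action}")
    return oughts

beliefs = {"rape": "serious harm", "donate": "welfare gain"}
desires = {"avoid": {"serious harm"}, "seek": {"welfare gain"}}
print(derive_ought(beliefs, desires))  # ['ought not rape', 'ought donate']
```

On this picture, any disagreement about the output traces back to one of the two inputs, which is where the comment locates the condemnation.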

AFAICS, the only way to get confused about this is to take "ought" to mean "instrumentally ought". But no one who believes in the is-ought gap thinks it's about instrumental-ought.

Although this doesn't seem to be where we are disagreeing, I will note that the solution I gave is one I've seen often used to resolve is-ought problems even with instrumental 'oughts'.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T14:20:07.359Z · LW(p) · GW(p)

What a perfectly informed and perfectly rational version of said rapist would value or desire

Remember that this is about your claim that science can inform us about oughts. How can a science conducted by imperfectly rational scientists inform us what desires a perfectly rational agent would have? And while we're on the subject, why would rationality constrain desires?

When we condemn the rapist, we are trying to show that he has some mistake in his first or second step, but not in the 'ought' statement, which is just the product of the first two statements.

When we condemn someone, we are saying they morally-ought not to have done what they did. You are taking "ought" as if it only had an instrumental meaning?

the solution I gave is one I've seen often used to resolve is-ought problems even with instrumental 'oughts'.

Instrumental oughts are the easy case.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-06T14:34:19.211Z · LW(p) · GW(p)

Remember that this is about your claim that science can inform us about oughts. How can a science conducted by imperfectly rational scientists inform us what desires a perfectly rational agent would have?

Science can't tell us anything about how to be more rational? Is that your claim?

After breaking down the equation for how one gets to an 'ought' statement, I think it's obvious how science can help us inform our 'oughts'. You seem to agree, more or less, with my assessment of the calculation necessary for reaching 'ought' statements, and since science can tell us things about each of the individual parts of the calculation, it follows that it can tell us things about the sum as well.

And while we're on the subject, why would rationality constrain desires?

Hmm... After thinking about it, it seems more likely that rationality belongs to the 'is' box, and reflectiveness/informativeness belong in the 'desire/goal' box. Duly noted.

When we condemn someone, we are saying they morally-ought not to have done what they did. You are taking "ought" as if it only had an instrumental meaning?

I'm not sure I understand what you are objecting to.

Replies from: Wei_Dai, Peterdjones
comment by Wei Dai (Wei_Dai) · 2013-01-07T11:48:59.069Z · LW(p) · GW(p)

Science can't tell us anything about how to be more rational? Is that your claim?

I think the claim is that science can't tell us how to become "perfectly rational". Science can certainly tell us how to become "more rational", but only if we already have a specification of what "more rational" is, and just need to figure out how to implement it. I think most of us who are trying to figure out such specifications do not see our work as following the methods of science, but rather more like doing philosophy.

comment by Peterdjones · 2013-01-06T15:16:29.889Z · LW(p) · GW(p)

Science can't tell us anything about how to be more rational?

I was responding to your claim:

"perfectly informed and perfectly rational"

You have shifted the ground from "perfect" to "better".

After breaking down the equation for how one gets to an 'ought' statement, I think it's obvious how science can help us inform our 'oughts'.

That's because you are still thinking of an "ought" as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn't that, it is ethical. You still haven't understood what the issue is about. There are circumstances under which I ought not to do what I desire to do.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-06T15:27:04.091Z · LW(p) · GW(p)

You have shifted the ground from "perfect" to "better".

The better it gets, the closer it gets to perfect. Eventually, if science can tell us enough about rationality, there's no reason we can't understand the best form of it.

That's because you are still thinking of an "ought" as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn't that, it is ethical. You still haven't understood what the issue is about.

I'm a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you've been calling 'instrumental meaning' or to what I'll call 'terminal meaning', as in terminal values.

There's nothing more to it. If you think there is, prove it. I'm going with Mackie on this one.

There are circumstances under which I ought not to do what I desire to do.

Yes, like I've said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting 'ought' will be wrong.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T16:22:30.632Z · LW(p) · GW(p)

Eventually, if science can tell us enough about rationality, there's no reason we can't understand the best form of it.

But your original claim was:

to get out of the is-ought bind all you have to do is specify a goal or desire you have.

You then switched to

perfectly informed and perfectly rational

and then switched again to gradual improvement.

In any case, it is still unclear how improving instrumental rationality is supposed to do anything at all with regard to ethics.

I'm a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you've been calling 'instrumental meaning' or to what I'll call 'terminal meaning', as in terminal values.

So? Your claim was that science can solve the is-ought problem. Are you claiming that there is scientific proof of MAR?

There's nothing more to it. If you think there is, prove it.

I have.

But the problem here is that your claim that science can solve the is-ought gap was put forward against the argument that philosophy still has a job to do in discussing "ought" issues. As it turns out, far from proving philosophy to be redundant, you are actually relying on it (albeit in a surreptitious and unargued way).

Yes, like I've said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting 'ought' will be wrong.

None of that has anything to do with ethics. You seem to have a blind spot about the subject.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-07T18:16:23.910Z · LW(p) · GW(p)

probably something close to a PMR, a la Luke

I have argued that PMR doesn't solve the is-ought problem.

comment by syllogism · 2013-01-07T09:40:19.368Z · LW(p) · GW(p)

Is there really so much to say about "ought"?

All you can and must do is introduce something you assume as an axiom, and then reason from there. You can't (by definition) motivate your axioms against some other set, and the reasoning is straightforward by the standards of most philosophy.

So for instance, Peter Singer's version of utilitarianism is an internally consistent product of some particular minimal set of axioms. If you fiddle with the axioms to re-introduce species-specific morality, okay you'll get different results --- but it won't be terribly hard to reason out what the resulting ethical position is given the axioms.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-08T00:49:46.894Z · LW(p) · GW(p)

Is there really so much to say about "ought"?

Yes. Metaethics is very complex.

You can't (by definition) motivate your axioms against some other set,

I don't see why not. In fact, I also don't see why it should be axioms all the way down. Metaethicists often start with a set of first-order ethical intuitions and use those to test axioms.

comment by Peterdjones · 2013-01-05T18:10:15.854Z · LW(p) · GW(p)

I do claim that almost all philosophy is useless for figuring out what is true,

I'll say it again: there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

ETA:

Mark doesn't explain here why it's "nonsense" to propose that truth-seekers (qua truth-seekers) should ignore 99% of all philosophy,

See above. You need something better.

why many metaphysical arguments aren't meaningless

LP (logical positivism) is a known failure, as has been pointed out here innumerable times. The burden is on you to justify the LP metaphysics-is-nonsense principle.

why some philosophical problems can't simply be dissolved,

Mark doesn't have to argue that no problem can be dissolved, since he never claimed that. You probably need to argue that the majority can be dissolved, since you keep citing the proportion of philosophy that is worthless as over 90%. You also probably need to explain why philosophers can't do that, in the teeth of examples of them doing just that (e.g. Dennett on qualia).

nor why Chalmers' approach to philosophy is superior to Eliezer's.

Consider this: if an amateur claims to be doing considerably better than an acknowledged domain expert, he is probably suffering from the Dunning-Kruger effect.

Replies from: NancyLebovitz, lukeprog, JonathanLivengood
comment by NancyLebovitz · 2013-01-05T23:35:00.437Z · LW(p) · GW(p)

I'll say it again: there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

This is probably false. Sometimes you know you have a problem for quite a while before you have a solution.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-05T23:39:11.758Z · LW(p) · GW(p)

What's the problem with philosophy?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-05T23:49:39.510Z · LW(p) · GW(p)

I meant that the general principle that you shouldn't point out problems until you have a solution doesn't seem sound to me.

As for philosophy, I don't know whether it has a problem. I do think that rather little useful has come out of it for a long time, and we could use disciplines of applied philosophy in the same spirit that engineering is a conveyor belt for making math, physics, and chemistry useful.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-05T23:55:20.268Z · LW(p) · GW(p)

I do think that rather little useful has come out of it for a long time,

Is it supposed to be useful?

we could use disciplines of applied philosophy

We have them, eg ethics.

comment by lukeprog · 2013-01-05T20:40:25.124Z · LW(p) · GW(p)

there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

Um... I'm writing an entire sequence about that, and so is Eliezer...

The burden is on you to justify the LP metaphysics-is-nonsense principle.

One might just as well argue that burden is on metaphysicists, to show that what they're saying is useful. But anyway, I'm not going to play burden of proof tennis. All I was saying in that paragraph is that Eliezer and I have explained our approaches to philosophy at length, and Mark's final paragraph offered only contradictions (of our views) rather than counter-arguments.

Also, neither Eliezer nor I are logical positivists. See here and here.

Replies from: TheAncientGeek, buybuydandavis, Peterdjones
comment by TheAncientGeek · 2015-07-06T12:03:25.753Z · LW(p) · GW(p)

there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

Um... I'm writing an entire sequence about that, and so is Eliezer...

So you don't have a better way, strictly speaking; you are in the process of formulating one... but you are sufficiently confident of success to offer criticism of other approaches on the basis of your expected results?

One might just as well argue that burden is on metaphysicists, to show that what they're saying is useful.

Physicalism is metaphysics.

comment by buybuydandavis · 2013-01-06T00:20:56.248Z · LW(p) · GW(p)

One might just as well argue that burden is on metaphysicists, to show that what they're saying is useful.

That's precisely the proper response to any proposed wonderful activity: show me the payoff.

And don't tell me all the truths you can produce - show the payoff of those truths. Show me what you can do with them, that I might want to have done.

Academia is full of people producing stacks of bits. That activity is very profitable for them, but I fail to see the payoff in many of those bits to anyone else, and in particular, me.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-07T22:00:48.264Z · LW(p) · GW(p)

Perhaps you could begin by demonstrating your own usefulness.

comment by Peterdjones · 2013-01-05T22:26:34.470Z · LW(p) · GW(p)

"Am wrtiing" does not equate to "actually have". You need to write it, answer objections and show that it works.

One might just as well argue that burden is on metaphysicists,

Metaphysicians.

to show that what they're saying is useful.

Also, neither Eliezer nor I are logical positivists

OK. Then the basis of your claim that metaphysics is meaningless is not the standard one. What, then, is it? No. Metaphysics is meaningful by default, because the default meaning of "meaningful" is "comprehensible to others", which metaphysics is. (You tried to shift the debate from "meaningful" to "useful". Don't.) There's no debate about whether ichthyology is meaningful. We don't assume by default that academic disciplines are meaningless. The claim that metaphysics is meaningless is extraordinary, so the burden falls on the maker to defend it.

But anyway, I'm not going to play burden of proof tennis.

You have already started. You initially placed the burden on your opponents. The fact that you are unwilling to justify that manoeuvre does not mean the burden rests there.

Mark's final paragraph offered only contradictions (of our views) rather than counter-arguments.

I thought Chalmers was meant as a counterexample -- of scientific philosophy Done Right.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-05T23:44:34.733Z · LW(p) · GW(p)

One might just as well argue that burden is on metaphysicists,

Metaphysicians.

That being said, I don't know whether 'metaphysicists' or 'metaphysicians' would be better.

Replies from: Alicorn
comment by Alicorn · 2013-01-06T07:21:22.853Z · LW(p) · GW(p)

It should be metaphysicist. They don't practice metamedicine.

Replies from: pragmatist
comment by pragmatist · 2013-01-06T17:57:31.846Z · LW(p) · GW(p)

"Metaphysics" shouldn't really be thought of as a description of the discipline the way, say, metamathematics is a description of a discipline. The name "metaphysics" is basically a historical accident. Aristotle's Metaphysics was called that because it was published after his Physics, not because of any relationship between the content of physics and metaphysics. So while it's true they're not practicing meta-medicine, they're not practicing meta-physics either.

Anyway, according to Wikipedia, both "metaphysicist" and "metaphysician" are correct.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-07T22:13:58.129Z · LW(p) · GW(p)

"Metaphysician" is in Merria-m-Websete, "Metaphysicist" is not.

comment by JonathanLivengood · 2013-01-06T10:36:24.196Z · LW(p) · GW(p)

I'll say it again: there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

Criticism could come in the form of showing that the questions shouldn't be asked for one reason or another. Or criticism could come in the form of showing that the questions cannot be answered with the available tools. For example, if I ran into a bunch of people trying to trisect an arbitrary angle using compass and straight-edge, I might show them that their tools are inadequate for the task. In principle, I could do that without having any replacement procedure. And yet, it seems that I have helped them out.

Such criticism would have at least the following point. If people are engaged in a practice that cannot accomplish what they aim to accomplish, then they are wasting resources. Getting them to redirect their energies to other projects -- perhaps getting them to search for other ways to satisfy their original aims (ways that might possibly work) -- would put their resources to work.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T13:03:51.801Z · LW(p) · GW(p)

Criticism could come in the form of showing that the questions shouldn't be asked for one reason or another

Agreed, in principle. However, I am waiting for someone to do that in a way that

  • applies to the stated 90%+ of philosophy;

  • is objective and scientific, not just an expression of personal preference;

  • avoids the self-undermining problems of LP.

If people are engaged in a practice that cannot accomplish what they aim to accomplish...

If. Note that there is already a great deal of criticism of particular schools of philosophy, and of philosophy in general, within philosophy. Note also that LW is not only lacking the Something Better, it is also lacking a critique that fulfils the three criteria above.

comment by rasputin · 2013-01-05T22:22:30.841Z · LW(p) · GW(p)

I generally get what you mean.

I only experience from my own experience. Therefore other people and their experience should only matter insofar as they affect my own.

This simple rationality throws away just about all philosophy of morality, because morality is irrelevant.

comment by Peterdjones · 2013-01-05T17:54:16.558Z · LW(p) · GW(p)

[Mark:] I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy...

[Luke:] Sturgeon's Law declares that "90% of everything is crap."

But the problem is knowing what the 90% is. It isn't clear why your training in rationality should allow you to do better than philosophers, who are also highly trained in rationality.

Replies from: novalis
comment by novalis · 2013-01-06T08:08:45.496Z · LW(p) · GW(p)

I'm pretty sure that one of Luke's articles did address this by noting that whatever training philosophers have in rationality is insufficient to allow them to pass a cognitive reflection test.

Replies from: pragmatist, Peterdjones
comment by pragmatist · 2013-01-06T16:48:15.818Z · LW(p) · GW(p)

This is a pretty tendentious way of phrasing the results of this paper. What does it mean to pass the CRT? Note that in that survey, people with some training in philosophy did significantly better on the CRT than people without any training in philosophy at all levels of educational attainment.

Replies from: novalis
comment by novalis · 2013-01-06T17:29:49.404Z · LW(p) · GW(p)

... but significantly less well than a random Less Wrong meetup or students at MIT, according to Luke's post.

Replies from: pragmatist, BerryPick6
comment by pragmatist · 2013-01-06T17:43:09.572Z · LW(p) · GW(p)

Sure, your average philosophy grad student is not as rational as your average LW meetup attendant or your average MIT student, but the evidence does suggest that they are more rational than your average grad student (assuming the CRT is a reasonably reliable test of rationality, of course). Surely that says something about the benefits of philosophy training. There are pretty strong selection effects associated with attending LW meetups and going to MIT. If that's your bar for passing the CRT, it's a very high bar.

I'd also note that looking at all people with some graduate training in philosophy is casting a pretty wide net. There are a lot of pretty bad philosophy graduate programs out there. I'd be interested in seeing a comparison of, say, students at the top 20 philosophy graduate programs with the groups mentioned in Luke's post. Or indeed of MIT students with some philosophy training and MIT students without such training.

Replies from: novalis
comment by novalis · 2013-01-06T17:45:49.525Z · LW(p) · GW(p)

Surely that says something about the benefits of philosophy training.

Yes, it says that you could do better with either specific rationality training, or engineering training. This is Luke's point.

I'd be interested in seeing a comparison of, say, students at the top 20 philosophy graduate programs with the groups mentioned in Luke's post.

Me too!

Replies from: pragmatist
comment by pragmatist · 2013-01-06T17:50:09.105Z · LW(p) · GW(p)

Yes, it says that you could do better with either specific rationality training, or engineering training.

I don't think you can take MIT student scores on the CRT as an indication of the effect of engineering training on rationality, any more than you could take NYU philosophy grad (NYU has the most prestigious philosophy program) scores as an indication of the effect of philosophy training.

Replies from: novalis, BerryPick6
comment by novalis · 2013-01-06T18:18:07.767Z · LW(p) · GW(p)

Compare MIT's scores to Harvard's or Princeton's -- they've all got super-smart students, but MIT does much better.

Replies from: pragmatist
comment by pragmatist · 2013-01-06T18:36:22.849Z · LW(p) · GW(p)

This is true, but I'm still not sure how you get from this that engineering training is better for rationality improvement than philosophy training, given that Harvard and Princeton students are already well above the baseline score for average undergraduates.

Assuming there is no selection effect (and this is a pretty big assumption), philosophy training raises the CRT score of the average undergraduate from 0.65 to 1.16. Assuming engineering training accounts for the entire CRT score difference between Princeton and MIT students (another big assumption), engineering training raised their average score from 1.63 to 2.18. How am I supposed to draw conclusions from this data about which training is better for rationality?
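The back-of-the-envelope comparison above can be made explicit. This sketch just does arithmetic on the figures quoted in the thread (the CRT is scored out of 3); attributing either difference to training is, as noted, a big assumption:

```python
# CRT averages as quoted in the comment above (max score is 3).
CRT_MAX = 3.0
phil_before, phil_after = 0.65, 1.16  # average undergrad -> with philosophy training
eng_before, eng_after = 1.63, 2.18    # Princeton -> MIT, attributed to engineering

# Absolute gains are nearly identical:
phil_gain = phil_after - phil_before  # ~0.51 points
eng_gain = eng_after - eng_before     # ~0.55 points

# But the relative headroom differs: the fraction of the remaining gap to a
# perfect score that each gain closes is a possible (hypothetical) way to
# compare them, and it favors the engineering group.
phil_frac = phil_gain / (CRT_MAX - phil_before)  # ~0.22
eng_frac = eng_gain / (CRT_MAX - eng_before)     # ~0.40
```

Which of these comparisons (absolute gain vs. gap closed) is the right one is exactly the ambiguity the comment is pointing at.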

Replies from: novalis
comment by novalis · 2013-01-07T00:27:21.304Z · LW(p) · GW(p)

I think (a) that there is probably a big selection effect, and (b) that it is possible that the test used is biased in favor of mathematical rather than generally logical thinking. The CRT also doesn't include things like noticing when a word is meaningless, which I would think would be one of the most important skills for philosophers. I'm not sure how one would test that.

I think you're right that the data don't show what I had thought. I had thought that professional philosophers did worse than MIT undergrads, but now it looks like there isn't data about that. I think I was confusing it with the results from professional American judges (almost all graduates of the other program which claims to teach reasoning).

comment by BerryPick6 · 2013-01-06T17:55:41.286Z · LW(p) · GW(p)

NYU has the most prestigious philosophy program

Really? Philosophicalgourmet has told me otherwise. I'm interested in seeing a link or source, since I'm looking at programs ATM.

Replies from: pragmatist
comment by pragmatist · 2013-01-06T18:01:14.645Z · LW(p) · GW(p)

NYU is no. 1 on the Gourmet Report rankings.

Perhaps it's not no. 1 for your particular field.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-06T18:04:00.961Z · LW(p) · GW(p)

Woah, brain glitch. Sorry.

comment by BerryPick6 · 2013-01-06T17:38:37.595Z · LW(p) · GW(p)

IIRC, it wasn't Luke who pointed this out during the 'Plato Kant...' discussion.

comment by Peterdjones · 2013-01-06T13:09:19.730Z · LW(p) · GW(p)

I would be delighted if you could point me to where the experiment was performed. He wouldn't have based his whole anti-philosophy case on personal opinion, would he?

Replies from: novalis
comment by novalis · 2013-01-06T17:59:09.511Z · LW(p) · GW(p)

Here are the two experiments: MIT students and Less Wrong.

Unfortunately, there are no details on the Less Wrong meetup experiment.

comment by timtyler · 2013-01-05T14:56:56.943Z · LW(p) · GW(p)

I think it's instructive to contrast Eliezer with David Chalmers... who is very much on top of the science in his field. [...]

Pah! To make Yudkowsky look bad - do not contrast with Chalmers!