Requesting advice: Doing Epistemology Right (Warning: Abstract mainstream Philosophy herein)
post by Carinthium · 2013-05-28T13:03:51.269Z · LW · GW · Legacy · 70 comments
I have naturally read the material here, but am still not sure how to act on three questions.
1: I've been arguing out the question of Foundationalism vs. Coherentism vs. other similarly basic methods of justifying knowledge (e.g. infinitism, pragmatism). The discussion left off with two problems for Foundationalism.
a: The Evil Demon argument, particularly the problem of memory. When following any piece of reasoning, an Evil Demon could theoretically fool my reason into thinking that it had reasoned correctly when it hadn't, or fool my memory into thinking I'd reasoned properly before with reasoning I'd never done. Since a Foundationalist either is a weak Foundationalist (and runs into severe problems) or must discard all but self-evident and incorrigible assumptions (of which memory is not one), I'm stuffed.
(Then again, it has been argued, if a Coherentist were deceived by an evil demon they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The "seems to cohere" formulation has its own problems.)
b: Even if that's discarded, there is still the problem of how Strong Foundationalist beliefs are justified within a Strong Foundationalist system. Strong Foundationalism is neither self-evident nor incorrigible, after all.
I know myself well enough to know I have an unusually strong (even for a non-rationalist) irrational emotive bias in favour of Foundationalism, and even I begin to suspect I've lost the argument (though some people arguing on my side would disagree). Just to confirm, though: have I lost? And what should I do now, either way?
2: What should I say on the question of skepticism (on which so far I've technically said nothing)? If I remember correctly, Eliezer has spoken of philosophy as being about how to act in the world, but I'm arguing with somebody who maintains as an axiom that the purpose of Philosophy is to find truth, whether useful or useless, in whatever area is under discussion.
3: Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see in outline that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.
Comments sorted by top scores.
comment by KnaveOfAllTrades · 2013-05-29T22:02:38.372Z · LW(p) · GW(p)
You need to clarify your intentions/success criteria. :) Here's my What Actually Happened technique to the rescue:
(a) You argued with some (it seems) conventional philosophers on various matters of epistemology.
(b) You asked LessWrong-type philosophers (presumably having little overlap with the aforementioned conventional philosophers) how to do epistemology.
(c) You outlined some of the conventional philosophy arguments on the aforementioned epistemological matters.
(d) You asked for neuroscience pointers to be able to contribute intelligently.
(e) Most of the responses here used LessWrong philosophy counterarguments against arguments you outlined.
(f) You gave possible conventional philosophy countercounterarguments.
This is largely a failure of communication because the counterarguers here are playing the game of LessWrong philosophy, while you've played, in response, the game of conventional philosophy, and the games have very different win conditions that lead you to play past each other. From skimming over the thread, I am as usual most inclined to agree with Eliezer: Epistemology is a domain of philosophy, but conventional philosophers are mostly not the best at—or necessarily the people to go to in order to apprehend—epistemology. However, I realise this is partly a cached response in myself: Wanting to befriend your coursemates and curry favour from teachers isn't an invalid goal, and I'd suspect that in that case you wouldn't best be served by ditching them. Not entirely, anyway...
Based on your post and its language, I identify at least the three following subqueries that inform your query:
(i) How can I win at conventional philosophy?
(ii) How can I win by my own argumentative criteria?
(iii) How can I convince the conventional philosophers?
Varying the balance of these subqueries greatly affects the best course of action.
If (i) dominates, you need to get good at playing the (language?) game of the other conventional philosophers. If their rules are anything like in my past fights with conventional philosophers, this largely means becoming a beast of the 'relevant literature' so that you can straightblast your opponents with rhetoric, jargon, namedropping, and citations until they're unable to fight back (if you get good enough, you will be able to consistently score first-round knockouts), or so that your depth in the chain of counter^n-arguments bottoms them out and you win by sheer attrition in argumentdropping, even if you take a lot of hits.
If (ii) dominates, you need to identify what will make you feel like you've won. If this is anything like me in my past fights with conventional philosophers, this largely means convincing yourself that while what they say is correct, their skepticism is overwrought and serves little purpose, and that you are superior for being 'useful'.
If (iii) dominates, the approach depends on what you're trying to convince them of. For example, whether the position of which you're trying to convince them is mainstream or contrarian completely changes your argumentative approach.
In the case of (d), the nature of the requested information is actually relatively clear, but the question arises of what you intend to do with it. Is it to guide your own thinking, or mostly to score points from the other philosophers for your knowledge, or...? If it's for anything other than improving your own arguments by your own standards, I would suggest (though of course you have more information about the philosophers in question) that you reconsider how much of a difference it will make; a lot of philosophers at best ignore and at worst disdain relevant information when it is raised against their positions, so the intuition that relevant information is useful for scoring points might be misguided.
Where you speak of seeing yourself shifting/having shifted and moving away from an old position (foundationalism) or towards a new one (coherentism) and describing your preference for foundationalism as irrational, it seems like you probably should just go ahead and disavow foundationalism. Or at least, it would if I were confident such affiliations were useful; I'm not. See conservation of expected evidence.
↑ comment by BerryPick6 · 2013-05-30T13:38:56.850Z · LW(p) · GW(p)
This is an excellent post.
↑ comment by Carinthium · 2013-05-30T13:20:04.801Z · LW(p) · GW(p)
Actually, although I do care about status, I am primarily trying to consider the truth of the issue. I don't seek truth in this area for any practical purpose, but because I want to know.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T18:18:20.354Z · LW(p) · GW(p)
Give up on justifying answers and just try to figure out what the answers really actually are, i.e., are you really actually inside an Evil Demon or not. Once you learn to quantify the reasoning involved using math, the justification thing will seem much more straightforward when you eventually return to it. Meanwhile you're asking the wrong question. Real epistemology is about finding correct answers, not justifying them to philosophers.
↑ comment by Carinthium · 2013-05-30T04:22:44.915Z · LW(p) · GW(p)
Judging from things you have said in the past, you are of the view that philosophy is about how to act in the world. Just to make it clear, the discussion is about what is true in the topic area, whether useful or not. Without a justification, I cannot rationally believe in the truth of the senses.
The Foundationalists have argued that probability is off the table because it is either a subjective feeling or an estimation of empirical evidence. Subjective feelings do not make a proper basis for justification, and if probability is based on empirical evidence while empirical evidence is based on probability, it doesn't work. The Coherentists conceded on probability and moved on to using "tenability" (i.e., believing X provisionally) to justify empirical evidence, for fear of accusations of direct circularity.
I don't see any way out of the metaphorical vicious circle- a conception of probability that gives a role to empirical data cannot be used to justify empirical data.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-30T05:17:40.771Z · LW(p) · GW(p)
Without a justification, I cannot rationally believe in the truth of the senses.
Yeah you can. Like, are you wearing socks? Yes, you're wearing socks. People were capable of this for ages before philosophy. That's not about what's useful, it's about what's true. How to justify it is a way more complex issue. But if you lose sight of the fact that you are really actually in real life wearing socks, and reminding you of this doesn't help, you may be beyond my ability to rescue by simple reminders. I guess you could read "The Simple Truth", "Highly Advanced Epistemology 101 for Beginners", and if that's not enough the rest of the Sequences.
↑ comment by Carinthium · 2013-05-30T13:43:39.428Z · LW(p) · GW(p)
I don't really see the relevance of "The Simple Truth" to this discussion besides its criticism of Coherentism. Next I read "The Useful Idea of Truth" and basically interpreted it as follows:
-The refutation of subjectivism is that experimental predictions are determined by belief, while experimental results are determined by reality.
(Edit: Your discussion of the idea of 'post-utopian' could be considered useful. I'm guessing you would question the way the term "justified" is being used. The Foundationalists and Coherentists each have their own idea of what it means to be justified- in the debate, the Foundationalists provisionally define it as what must be true in any possible universe, plus what can be rationally inferred from such truths without any other starting assumptions ("rationally" meaning all rules proven to work based on truths in the former category). The Coherentists define justification according to their web of beliefs. Both are arguing about which side has good reasoning.)
This is clearly circular. You did solve the problem of doubting the senses alone by reference to the difference between experimental predictions and results. That does not solve the problem of doubting induction, doubting the principle of probability, or doubting memory.
As for Tyrrell's recommendation of "Where recursive justification hits bottom", in it you appear to me to be a Coherentist. However, the article basically appeals to "How best to achieve things in the world." In the discussion that started it all, we had all agreed to focus on what was true about the subject matter under debate, whether useful or not.
I'll keep going, but I don't see anything else that might be relevant.
↑ comment by Carinthium · 2013-05-30T07:31:44.929Z · LW(p) · GW(p)
Going to need some time to go through them- after I have, I'll come back to you with a new reply.
↑ comment by Tyrrell_McAllister · 2013-05-30T13:23:32.627Z · LW(p) · GW(p)
I would also recommend "Where recursive justification hits bottom". Maybe start with that one, because it is shorter.
comment by Larks · 2013-05-28T15:02:12.413Z · LW(p) · GW(p)
Externalism is always the answer! Accept that some unlucky people who are in sceptical scenarios would be doomed; but that doesn't mean that you, who are not in a sceptical scenario, are doomed too, even though the two situations are subjectively indistinguishable.
↑ comment by Carinthium · 2013-05-29T01:01:12.823Z · LW(p) · GW(p)
If I said that I would be rightly laughed at. How can I rule it out if it's subjectively indistinguishable?
↑ comment by AlexSchell · 2013-05-29T01:45:10.732Z · LW(p) · GW(p)
You might think that you'd be laughed at, but actually externalism about evidence and knowledge is not an uncommon view in philosophy. Reliabilism, for instance, has it that whether or not you know something is a function of the objective reliability of your perceptual faculties and not merely of their input to your conscious experience. Timothy Williamson has also defended externalism about evidence (in Knowledge and its Limits I think).
↑ comment by Carinthium · 2013-05-29T01:47:45.445Z · LW(p) · GW(p)
I would be laughed at if I made the claim with merely the arguments you gave. I've never seen a decent argument for externalism- all of the ones I have seen are circular in one way or another. I'll look up your sources, but I don't hold out much hope.
↑ comment by AlexSchell · 2013-05-29T02:15:33.761Z · LW(p) · GW(p)
I didn't give any arguments -- you're confusing me with Larks. Also, my providing sources is not to be understood as endorsing externalism. I'm not sure about it.
↑ comment by Carinthium · 2013-05-29T02:17:51.233Z · LW(p) · GW(p)
Sorry about that. I'll check it out.
↑ comment by Larks · 2013-05-29T23:02:20.831Z · LW(p) · GW(p)
Nor was I in fact making any arguments - I was simply stating the position. It's been a few years since I've studied epistemology, so I wouldn't trust myself to do so. The SEP is normally a good bet, and I seem to recall enjoying Nozick (Philosophical Explanations) and the Thermometer Model of Knowledge.
I don't recall being convinced by any of the Externalist models I studied (Relevant Possible Alternatives, Tracking, Reliabilism, Causal and Defeasibility accounts), but I think something in that ballpark is a good idea. Externalism has been, in general, a very successful philosophical project in a variety of areas (e.g. content externalism).
Also, I hate to say it, but I think you would be better off ignoring everything that has been said on this thread. LW is good for many things, but its appreciation of academic philosophy is frankly infantile.
↑ comment by torekp · 2013-06-03T22:35:43.573Z · LW(p) · GW(p)
Part of the point of externalism is to change the question -- although it's useful to note that to the extent the original question was framed in terms of "knowledge", the question hasn't entirely changed. So, you can't rule the skeptical scenario out, but you don't need to. That sub-question is being abandoned, or at least severely demoted.
I second Larks' recommendation, in another comment, of Nozick's Philosophical Explanations. You can probably google up a summary or review to get a taste.
↑ comment by Carinthium · 2013-06-04T03:58:11.718Z · LW(p) · GW(p)
All the arguments for changing the question seem to be either pragmatist arguments (pragmatism does not correlate with truth in any event) or basically amount to "Take the existence of the world on faith" (which is no more useful than it is to take anything else on faith).
comment by DSherron · 2013-05-28T20:38:52.538Z · LW(p) · GW(p)
Warning: I am not a philosophy student and haven't the slightest clue what any of your terms mean. That said, I can still answer your questions.
1) Occam's Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes' Theorem, then you're entirely done. There's nothing else you can do. Sure, if you're unlucky then you'll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well with the world around you. One and zero are not probabilities; you are never certain of anything, even those things you're probably getting used to calling a priori truths. Learn to abandon your intuitions about certainty; even if you could be certain of something, our default intuitions will lead us to make bad bets when certainty is involved, so there's nothing there worth holding on to. In any case, the right answer is understanding that beliefs are always always always uncertain. I'm pretty sure that 2 + 2 = 4, but I could be convinced otherwise by an overwhelming mountain of evidence.
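A minimal sketch of what "distribute priors by complexity, then update" means in practice, in Python. The hypothesis names, the use of description length as a complexity proxy, and the likelihood numbers are all invented here for illustration, not claims about the real odds:

def complexity_prior(hypotheses):
    # Give unnormalized mass 2^-len(h) to each hypothesis, so longer
    # (more complex) descriptions start out less probable.
    weights = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def bayes_update(prior, likelihood):
    # posterior(h) is proportional to prior(h) * P(evidence | h).
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

# Two rival explanations of your sense data (hypothetical labels):
hypotheses = ["senses-track-world", "demon-fakes-senses-consistently"]
prior = complexity_prior(hypotheses)

# A consistent demon predicts the same stable observations the mundane
# hypothesis does, so the likelihoods are equal and the complexity
# penalty is what keeps the demon improbable.
likelihood = {"senses-track-world": 0.9, "demon-fakes-senses-consistently": 0.9}
print(bayes_update(prior, likelihood))  # simpler hypothesis keeps most mass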
2) I don't know what question is being asked here, but if it has no possible impact on the real world then you can't decide if it's true or false. Look at Bayes' Theorem; if probability (evidence given statement) is equal to probability (evidence), then your final belief is the same as your prior. If there is in principle no experiment you could run which would give you evidence for or against it, then the question is not really a question; knowing it was true or false would tell you nothing about which possible world you live in; it would not let you update your map. It is not merely useless but fundamentally not in the same class of statements as things like "are apples yellow?" or "should machines have legal rights, given "should" referring to generalized human preferences?" If there is an experiment you could run in principle, and knowing whether the statement is true or false would tell you something, then you simply have to refer to Occam's Razor to find your prior. You won't necessarily get an answer that's firmly one way or another, but you might.
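Written out, this is just the standard algebra, nothing specific to this thread. Bayes' theorem says
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\]
so if \(P(E \mid H) = P(E)\) the fraction collapses and \(P(H \mid E) = P(H)\): observing the evidence leaves your belief exactly where it started.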
3) I'll admit I had to look this up to give an answer. What I found was that there is literally not a question here. Go read A Human's Guide to Words (sequence on LW) to understand why, although I'll give a brief explanation. "Knowledge", the word, is not a fundamental thing. Nowhere is there inscribed on the Almighty Rock of Knowledge that "knowledge" means "justified true belief" or "correctly assigned >90% certainty" or "things the Flying Spaghetti Monster told you." It only has meaning as a symbol that we humans can use to communicate. If I made it clear that I was going to use the phrase "know x" to mean "ate x for breakfast", and then said "I know a chicken biscuit", I would be committing an error; but that error would have nothing to do with the true meaning of "know". When I say "I know that the earth is not flat", I mean that I have seen pretty strong evidence that the earth really isn't flat, such that for it to be flat would require a severe mental break on my part or other similarly unlikely circumstances. I don't know it with certainty; I don't know anything with certainty. But that's not what "know" means in the minds of most people I speak with, so I can say "I know the world is not flat" and everyone around me gets the right idea. There is no such thing as a correct attribution of knowledge, nor an incorrect one, because knowledge is not a fundamental thing nor sharply defined; instead it's a fuzzy shape in conceptspace which corresponds to some human intuitions about the world but not to the actual territory. Humans are biased towards concrete true/false dichotomies, but that's not how the real world works. Once you realize that beliefs are probabilities you'll realize how incredibly silly most philosophical discussions of knowledge are.
My quick advice to you in general (so that you can solve future problems like this on your own) is three-fold. First, learn Bayes and keep it close to you at all times. The Twelve Virtues of Rationality are nice for a way to remind yourself what it means to want to actually get the right answer. Second, read A Human's Guide to Words, and in particular play Rationalist Taboo constantly. Play it with yourself before you speak and with others when they use words like "knowledge" or "free will". Do not simply accept a vague intuition; play it until you're certain of what you mean (and it matches what you meant when you first said it), or certain that you have no idea. Pro tip: free will sounds like a pretty simple concept, but you have no idea how to specify it other than that thing that you can feel you have. (And any other specification fails to capture what you or anybody else really want to talk about). Third, and I'm sure some people will disagree here, but... Get the heck out of philosophy. There is almost nothing of value that you'll get from the field. Almost all of it is trash, because there really aren't enough interesting questions that don't require you to actually go out and do /gasp/ science to justify an entire field. Pretty much all the important ones have answers already, although you wouldn't know that by talking to philosophers. Philosophy was worthwhile in Ancient Greece when "philosopher" meant "aspiring rationalist" and human knowledge was at the stage of gods controlling everything, but in the modern day we already have the basic rationalist's toolkit available for mass consumption. Any serious advance made in the Art will come from needing it to do something that the Art you were taught couldn't do for you, and such advances are what philosophy should be, but isn't, providing. You won't find need of a new rationalist Art if you're trying to convince other people, who by definition do not already have this new Art, of some position that you stumbled upon because of other people who argued it convincingly to you. If you care about the human state of knowledge, go into any scientific discipline. Otherwise just pick literally anything else. There's nothing for you in philosophy except for a whole lot of confused words.
↑ comment by [deleted] · 2013-05-29T02:54:31.759Z · LW(p) · GW(p)
Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree - don't get a philosophy degree. Take some electives, sure - that'll give you an introduction to the field - but after that there's absolutely no reason to pay for a philosophy degree. If you're interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree.
So, now that that's out of the way: philosophy isn't useless. In fact, at its more useful end it blurs pretty seamlessly into mathematics. It's also relevant to cognitive science, and in fact science in general. The only time philosophy is useless is when it isn't being used to do anything. So, sure, pure philosophy is useless, but that's like saying "pure rationality is useless". We use rationality in combination with every other discipline; that's the point of rationality.
As for the OP's questions:
1: DSherron suggests following the method of the 14th-century philosopher William of Ockham, but I don't think that's relevant to the question. As far as I can tell, ALL justificatory systems suffer from the Münchhausen trilemma. Given that, Foundationalism and Coherentism seem to me to be pretty much equivalent. You wouldn't pick incoherent axioms as your foundations, and conversely any coherent system of justifications should be decomposable into an orthogonal set of fundamental axioms and theorems derived from them. Maybe there's something I'm missing, though.
2: DSherron's point is a good one. It was first formalised by the philosopher-mathematician Leibniz, who proposed the principle of the Identity of Indiscernibles.
3: DSherron suggests that the LW sequence "A Human's Guide to Words" is relevant here. Since that sequence is basically a huge discussion of the philosophy of language, and makes dozens of philosophical arguments aimed at correcting philosophical errors, I agree that it is a useful resource.
↑ comment by Carinthium · 2013-05-29T12:49:42.993Z · LW(p) · GW(p)
I'm doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.
1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have dissed on them massively).
The Coherentists in the argument have gone to great pains to say that "consistency" and "coherence" are different things- their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident axioms that any system must accept.
2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I'm presumably missing something.
3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.
↑ comment by DSherron · 2013-05-29T15:38:11.649Z · LW(p) · GW(p)
For 1) the answer is basically to figure out what bets you're willing to make. You don't know anything, for strong definitions of know. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here's the catch: beliefs are probabilities. You can say "I don't know that I'm not going to be burned at the stake for writing on Less Wrong" while also saying "but I probably won't be". You have to make a decision; choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can't technically fault you for priors, but then again justification to other humans isn't really the point. I'm not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I'm not sure how anyone thinks that any beliefs are "self-evident". You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I'd like to make some bets with you... We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed any differently then you should be making bets left and right and winning against anyone who thought something silly, like that a coin would stay 50/50 just because it usually does. Basically, you can't argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can't, since you don't know what you want), and I'll get on with the whole living thing over here. You should read "The Simple Truth"; it's a nice exploration of some of these ideas. You can't justify knowledge, at all, and there's no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still count sheep, if you really want to. 2) is mostly contained in 1), I think.
3) Why do you need empirical evidence? What could that possibly show you? I guess you could theoretically get a bunch of Contextualists and Invariantists together and show that most of them think that "know" has a fundamental meaning, but that's only evidence that those people are silly. Words are not special. To draw from your lower comment to me, "a trout is a type of fish" is not fundamentally true, linguistically or otherwise. It is true when you, as an English speaker, say it in an English forum, read by English speakers. Is "Фольре є омдни з дівви риб" a linguistic truth? That's (probably) the same sentence in a language picked at random off Google Translate. So, is it true? Answer before you continue reading. Actually, I lied. That sentence is gibberish; I moved the letters around. A native speaker of that language would have told you it was clearly not true. But you had no idea whether it was or wasn't; you don't speak that language, and for that matter neither do I. I could have just written profanity for all I know. But the meanings are not fundamental to the little squiggles on your computer screen; they are in your mind. Words are just mental paintbrush handles, and with them we can draw pictures in each other's minds, similar to those in our own. If you knew that I had had some kind of neurological malfunction such that I associated the word "trout" with a mental image of a moderately sized land-bound mammal, and I said "a trout is a type of fish", you would know that I was wrong (and possibly confused about what fish were). If you told me "a trout is a type of fish", without clarifying that your idea of trout was different from mine, you'd be lying. Words do not have meanings; they are simply convenient mental handles to paint broad pictures in each other's minds. "Know" is exactly the same way. There is no true, really real, more real than that other one meaning of "know", just the broad pictures that the word can paint in minds. The only reason anyone argues over definitions is to sneak in underhanded connotations (or, potentially, to demand that they not be brought in). There is no argument. Whatever the Contextualists want to mean by "know" can be called "to flozzlebait", and whatever the Invariantists want to mean by it can be called "to mankieinate". There, now that they both understand each other, they can resolve their argument... if there ever even was one (which I doubt).
↑ comment by Carinthium · 2013-05-30T03:32:51.216Z · LW(p) · GW(p)
1: The Foundationalists have claimed probability is off the metaphorical table- the concept of probability rests either on subjective feeling (irrational) or on empirical evidence (circular, as our belief in empirical evidence rests on the assumption that it is probable). They had problems with "self-evident", but I created a new definition as "must be true in any possible universe" (although I'm not sure of the truth of his conclusion, the way Eliezer describes a non-reductionist universe basically claims this sort of self-evidence for reductionism).
2: Doesn't solve the problem I have with it.
3: Of the statement "A trout is a type of fish", the simplification "This statement is true in English" is good enough to describe reality. The invariantist, and likely the contextualist, would claim that universally, across languages, humans have a concept of "knows", however they describe it, which fits their philosophy.
↑ comment by DSherron · 2013-05-29T14:41:48.948Z · LW(p) · GW(p)
You're right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world, such that it effectively becomes another discipline, is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn't. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the problem is the useful parts are almost all settled questions (and the ones that aren't are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality, from whence we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major thing which has traditionally been a philosophical question which we still don't have an answer to, namely morality, is fundamentally reduced to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, just the fact that we value some things that aren't just us and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it's also one which doesn't help us in real life much).
Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer to it or not. If you want to know truth, don't talk to a philosopher. If you pick your beliefs based on the strength of human arguments, you're going to believe whatever the most persuasive person believes, and there's only weak evidence that that should correlate with truth. Sure, philosophy feeds into rationality and cog-sci and mathematics, but if you want to figure out which parts do so in a useful way, go study those fields. The problem with philosophy as a field is not the questions it asks but the way it answers them; there is no force that drives philosophers to accept correct arguments that they don't like, so they all believe whatever they want to believe (and everyone says that's ok). I mean, anti-reductionism? Epiphenomenalism? This stuff is maybe a little better than religious nonsense, but it still deserves to be laughed at, not taken as a serious opponent. My problem is not the fundamentals of the field, but the way it exists in the real world.
↑ comment by Carinthium · 2013-05-30T03:43:24.174Z · LW(p) · GW(p)
If you judge philosophy by what helps us in the empirical world, this is mostly correct. The importance of rationality to philosophy (granted the existence of an empirical world) I also agree with. However, some people want to know the true answers to these questions, useful or not. For that, argument is all we've got.
I would mostly agree with rationality training for philosophers, except in that there is something both circular and silly about using empirical data to influence, if indirectly, discussions on if the empirical world exists.
↑ comment by DSherron · 2013-05-30T14:10:07.980Z · LW(p) · GW(p)
Super quick and dirty response: I believe it exists, you believe it exists, and everyone you've ever spoken to believes it exists. You have massive evidence that it exists in the form of memories which seem far more likely to come from it actually existing than any other possibility. Is there a chance we're all wrong (or that you're hallucinating the rest of us, etc.)? Of course. There always is. If someone demands proof that it exists, they will be disappointed - there is no such thing as irrefutable truth. Not even "a priori" logic - not only could you be mistaken, but additionally your thoughts are physical, empirical phenomena, so you can't take their existence as granted while denying the physical world the same status.
If anyone really truly believes that the empirical world doesn't exist, you haven't heard from them. They might believe that they believed it, but to truly believe that it doesn't exist, or even simply that we have no evidence either way and it's therefore a tossup, they won't bother arguing about it (it's as likely to cause harm as good). They'll pick their actions completely at random, and probably die because "eat" never came up on their list. If anyone truly thinks that the status of the physical world is questionable, as a serious position, I'd like to meet them. I'd also like to get them help, because they are clinically insane (that's what we call people who can't connect to reality on some level).
Basically, the whole discussion is moot. There is no reason for me to deny the existence of what I see, nor for you to do so, nor anyone else having the discussion. Reality exists, and that is true, whether or not you can argue a rock into believing it. I don't care what rocks, or neutral judges, or anyone like that believes. I care about what I believe and what other humans and human-like things believe. That's why philosophy in that manner is worthless - it's all about argumentation, persuasion, and social rules, not about seeking truth.
↑ comment by Carinthium · 2013-06-05T01:15:16.896Z · LW(p) · GW(p)
Your argument is about as valid as "Take it on faith". First, unless appealing to pragmatism, your argument is circular in using the belief of others when you can't justifiably assume their existence. Second, your argument is irrational in that it appeals to "Everybody believes X" to support X. Third, a source claiming X to be so is only evidence for X being so if you have reason to consider the source reliable.
You are also mixing up "epistemic order" with "empirical order", to frame two new concepts. "Epistemic order" represents orders of inference- if I infer A from B and B from C, then C is prior to B and B is prior to A in epistemic order regardless of the real-world relation of whatever they are. "Empirical order", of course, represents what is the empirical cause of what (if indeed anything causes anything).
A person detects their own thoughts in a different way from the way they detect their own senses, so they are unrelated in epistemic order. You raise a valid point about assuming that one's thoughts really are one's thoughts, but unless resorting to the Memory Argument (which is part of the Evil Demon argument I discussed) they are at least available as arguments to consider.
The Foundationalist skeptic is arguing that believing in the existence of the world IS IRRATIONAL. Without resorting to the arguments I describe in the first post, there seems to be no way to get around this. Pragmatics clearly isn't one, after all.
↑ comment by Carinthium · 2013-05-29T01:25:13.691Z · LW(p) · GW(p)
1: Occam's Razor has already been covered. The concept inherently rests (unless you take William of Ockham's original version, which cannot be applied in the same way) on empirical observations about the world- which are the things under doubt.
2: The argument started on if it is rational to trust the senses, and turned into an argument about the proper rules to decide that question. Such a question cannot be solved empirically. Besides, such a rule cannot justify itself as it is not empirically rooted.
3: I considered this possibility, but wasn't confident enough to claim it because, on rare occasions and despite the nature of human concepts, a simplistic explanation actually works. For example, "a trout is a type of fish" is true as a linguistic statement, no clarification or deeper understanding of the human mind required.
My mind is good at Verbal Comprehension skills, such as Philosophy and Law. To get into Law at Melbourne, I need to get good marks. Philosophy is a subject at which I get good marks, and which is fun because of how my brain works, so I do it. I take a genuine interest because I like the intellectual stimulation and I want to be right about the sort of things philosophy covers.
↑ comment by Manfred · 2013-05-29T01:01:01.762Z · LW(p) · GW(p)
Deferring to a simplicity prior is good for the outside world, but also raises the question of where you got your laws of thought and your assumption of simplicity. At some point you do need to say "okay, that's good enough," because it's always possible to have started from the wrong thoughts.
Explanations aren't first and foremost about what the world is like. They're about what we find satisfying. It's like how people keep trying to explain quantum mechanics in terms of balls and springs - it's not because balls and springs are inherently better, it's because we find them satisfying enough to us that once we explain the world in terms of them we can say "okay, that's good enough."
↑ comment by Carinthium · 2013-05-29T01:10:18.017Z · LW(p) · GW(p)
Philosophical Infinitism in a nutshell (the conclusions, not the line of argument, which seems unusual as far as I can tell).
Anyway, the Coherentists would say that you can simply go around in circles for justification (factoring for "webbiness"), whilst the Foundationalist skeptics would say that this supports the view that belief in the existence of the world is inherently irrational. Just because something is satisfying doesn't mean it has any correlation with reality.
↑ comment by Manfred · 2013-05-29T02:54:17.688Z · LW(p) · GW(p)
The truth is consistent, but not all consistent things are true. So yeah.
I think the viewpoint that it's not only necessary but okay to have unjustified fundamental assumptions relies on fairly recent stuff. Aristotle could probably tell you why it was necessary (it's just an information-theoretic first cause argument after all), but wouldn't have thought it was okay, and would have been motivated to reach another conclusion.
It's like I said about explanations. Once you know that humans are accidental physical processes, that all sorts of minds are possible, and some of them will be wrong, and that's just how it is, then maybe you can get around to thinking it's okay for us humans, who are after all just smart meat, to just accept some stuff to get started. The reason that we don't fall apart into fundamentally irreconcilable worldviews isn't magic, it's just the fact that we're all pretty similar, having been molded by the constraints of reality.
↑ comment by Carinthium · 2013-05-29T03:43:33.943Z · LW(p) · GW(p)
The problem is that I can't argue based on the existence of the empirical world when that is the very thing the argument is about.
↑ comment by Manfred · 2013-05-29T04:10:34.441Z · LW(p) · GW(p)
That the empirical world exists is a supposition you were born into. The argument is over whether that's satisfying enough to be called an explanation.
↑ comment by Carinthium · 2013-05-29T04:14:34.209Z · LW(p) · GW(p)
The argument is about whether the belief is rational or irrational. Discussing it in the manner you describe is off the point.
↑ comment by Manfred · 2013-05-29T17:27:05.767Z · LW(p) · GW(p)
My previous reply wasn't very helpful, sorry. Let me reiterate what I said above: making assumptions isn't so much rational as unavoidable. And so you ask "then, should we believe in the external world?"
Well, this question has two answers. The first is that there is no argument that will convince an agent who didn't make any assumptions that they should believe in an external world. In fact, there is no truth so self-evident it can convince any reasoner. For an illustration of this, see What the Tortoise Said to Achilles. Thus, from a perspective that makes no assumptions, no assumption is particularly better than another.
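Schematically, and purely as a compression of Carroll's dialogue rather than anything beyond it: the Tortoise grants the premises but not the inference,
\[
A,\quad A \to B \;\;\not\Rightarrow_{\text{Tortoise}}\;\; B,
\]
and granting the further premise \(\big(A \wedge (A \to B)\big) \to B\) only lengthens the list,
\[
A,\quad A \to B,\quad \big(A \wedge (A \to B)\big) \to B \;\;\not\Rightarrow_{\text{Tortoise}}\;\; B, \quad \ldots
\]
No added premise ever forces the step; the rule of detachment has to be used, not merely written down as another belief.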
There is a problem with the first answer, though. This is that "the perspective that makes no assumptions" is the epistemological equivalent of someone with a rock in their head. It's even worse than the tortoise - it can't talk, it can't reason, because it doesn't assume even provisionally that the external world exists or that (A and A->B) -> B. You can't convince it of anything not because all positions are unworthy, but because there's no point trying to convince a rock.
The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say "but you're using your admittedly biased brain to say that, so it's no good," but, I ask you, what else should I use? My kidneys?
If you prefer a slightly more sophisticated treatment, consider different agents interpreting "should we believe in the external world" with different meanings of the word "should". We can call ours human_should, and yes, you human_should believe in the external world. But the word no_assumptions_should does not, in fact, have a definition, because the agent with no assumptions, the guy with a rock in his head, does not assume up any standards to judge actions with. Lacking this alternative, the human_reasonable course of action is to interpret your question as "human_should we believe in the external world," to which the answer is yes.
↑ comment by torekp · 2013-06-03T22:42:46.567Z · LW(p) · GW(p)
The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say "but you're using your admittedly biased brain to say that, so it's no good," but, I ask you, what else should I use? My kidneys?
This is the place to whip out the farmer/directions joke. The one that ends, "you just can't get there from here."
↑ comment by Carinthium · 2013-05-30T00:56:50.230Z · LW(p) · GW(p)
I'd already considered the "What the Tortoise said to Achilles" argument in a different form. I'd gotten around it (I was arguing Foundationalism until now, remember) by redefining self-evident as:
What must be true in any possible universe.
If a truth is self-evident, then a universe where it was false simply COULD NOT EXIST, for one reason or another. Eliezer has described a non-Reductionist universe the way I believe a legitimate self-evident truth (by this definition) should be described. To those who object, I call it self-evident' (self-evident dash, as I say it in normal conversation) and use it instead of self-evident as a basis for justification.
The Foundationalist skeptics in the debate would laugh at your argument and point out that you can't even assume the existence of a brain with justification, nor the existence of "should", either in the human sense or any other. Thus your argument falls apart.
↑ comment by Manfred · 2013-05-30T02:13:39.986Z · LW(p) · GW(p)
I agree with the foundationalist skeptics, except that anything "falling apart" is, of course, something they just assume without justification, and should be discarded :)
↑ comment by Carinthium · 2013-06-05T01:07:42.070Z · LW(p) · GW(p)
Self-evident from the definition of rational: it is irrational to believe a proposition if you have no evidence for or against it.
Empirical evidence is not evidence if you have no reason to trust it. Therefore, the fact that your argument falls apart is self-evident given the premises and conclusions therein.
↑ comment by Manfred · 2013-06-05T04:33:46.477Z · LW(p) · GW(p)
The "definition of rational" is already without foundation - see again What the Tortoise Said to Achilles, and No Universally Convincing Arguments.
Or perhaps I'm overestimating how skeptical normal skepticism is? Is it normal for foundationalist skeptics to say that there's no reason to believe the external world, but that we have to follow certain laws of thought "by definition," and thus be unable to believe the Tortoise could exist? That's not a rhetorical question, I'm pretty ignorant about this stuff.
↑ comment by Carinthium · 2013-06-05T07:39:23.244Z · LW(p) · GW(p)
I've already gotten past the arguments in those two cases by redefining self-evident by reference to what must be true in any possible universe. Eliezer himself describes reductionism in a way which fits my new idea of self-evident. The Foundationalist skeptics agree with me. As for the definition of rational, if you understand nominalism you will see why the definition is beyond dispute.
The Foundationalist Skeptic supports starting from no assumptions except those that can be demonstrated to be self-evident.
↑ comment by Manfred · 2013-06-05T08:54:45.866Z · LW(p) · GW(p)
The Foundationalist Skeptic supports starting from no assumptions except those that can be demonstrated to be self-evident.
So, you agree that the Foundationalist Skeptic rejects the use of modus ponens, since Achilles cannot possibly convince the Tortoise to use it?
Also, I recommend this post. You seem to be roving into that territory. And calling anything, even modus ponens, "beyond dispute" only works within a certain framework of what is disputable - someone with a different framework (the tortoise) may think their framework is beyond dispute. In short, the reflective equilibrium of minds does not have just one stable point.
↑ comment by Carinthium · 2013-06-05T12:22:07.622Z · LW(p) · GW(p)
Just to remind you, I am not TECHNICALLY arguing for Foundationalist skepticism here. My argument is that it doesn't have any major weaknesses OTHER THAN the ones I've already mentioned.
Regarding the use of modus ponens, that WAS a problem until I redefined self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident.
I suspect a Foundationalist skeptic shouldn't engage with Eliezer's arguments in this case, as they appeal to empirical evidence, but leaving that aside, the ordinary definition of 'rational' contains a contradiction. In ordinary cases of "rationality", if somebody claims A because of X and is asked "Why should I trust X?", the claimer is expected to have an answer for why X is trustworthy.
The four possible solutions to this are Weak Foundationalism (ending in first causes they can't justify), Infinitism (infinite regress), Coherentism (believing because knowledge coheres), and Strong Foundationalism. This excludes appealing to Common Sense, as Common Sense is both incoherent and commonly considered incompatible with Rationality.
A Weak Foundationalist is challengeable on privileging their starting points, plus the fact that any reason they give for privileging said starting points is itself a reason for their starting point and hence another stage back. Infinitism and Coherentism have the problem that without a first cause we have no reason to believe they cohere with reality. This leaves Strong Foundationalism by default.
↑ comment by Manfred · 2013-06-05T13:53:48.513Z · LW(p) · GW(p)
self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident.
So why doesn't the Tortoise agree that modus ponens is true in any possible universe? Do you have some special access to truth that the Tortoise doesn't? If you don't, isn't this just an unusual Neurathian vessel of the nautical kind?
↑ comment by Carinthium · 2013-06-05T14:05:55.773Z · LW(p) · GW(p)
What the Tortoise believes is irrelevant. In any universe whatsoever, proper modus ponens will work. Another way of showing this is that a universe where it doesn't work would be internally incoherent. Arguments are mind-independent- whether my mind has special access to truth or not (theoretically, I may simply have gotten it right this time and this time only), my arguments are just as valid.
Eliezer is right to say that you can't argue with a rock. However, insane individuals who disagree in the Tortoise case are irrelevant, because the reasoning is based not on universal agreement on first premises but on the fact that in any possible universe the premises must be true.
↑ comment by Manfred · 2013-06-06T04:13:04.336Z · LW(p) · GW(p)
I agree - modus ponens works, even though there are some minds who will reject it with internally coherent criteria. Even criteria as simple as "modus ponens works, except when it will lead to belief in the primeness of 7 being added to your belief pool" - this definition defends itself, because if it was wrong, you could prove 7 was prime, therefore it's not wrong.
You could be put in a room with one of these 7-denialists, and no argument you made could convince them that they had the wrong form of modus ponens, and you had the right one.
But try seeing it from their perspective. To them, 7 not being prime is just how it is. To them, you're the 7-denialist, and they've been put in a room with you, yet are unable to convince you that you have the wrong form of modus ponens, and they have the right one.
Suppose you try to show that a universe where 7 isn't prime is internally inconsistent. What would the proof look like? Well, it would look like some axioms of arithmetic, which you and the 7-denialists share. Then you'd apply modus ponens to these axioms, until you reached the conclusion that 7 is prime, and thus any system with "7 is not prime" added to the basic axioms would be inconsistent.
What would the 7-denialist you're in a room with say to that? I think it's pretty clear - they'd say that you're making a very elementary mistake, you're just applying modus ponens wrong. In the step where you go from 7 not being factorable into 2, 3, 4, 5 or 6, to 7 being prime, you've committed a logical fallacy, and have not shown that 7 is prime from the basic axioms. Therefore you cannot rule out that 7 is not prime, and your version of modus ponens is therefore not true in every possible universe.
Just because you can use something to prove itself, they say, doesn't mean it's right in every possible universe. You should try to be a little more cosmopolitan and seriously consider that 7 isn't prime.
↑ comment by Carinthium · 2013-06-07T10:43:23.513Z · LW(p) · GW(p)
I'm guessing you disagree with Eliezer's thoughts on Reductionism, then?
The 7-denialists are making a circular argument with your first defence of their position. Circular arguments aren't self-evidently wrong, but they are self-evidently not evidence, as there isn't justification for believing any of them. The argument for conventional modus ponens is not a circular argument.
The second argument would be that the 7-denialists are making an additional assumption they haven't proven, whilst the Foundationalist Skeptic starts with no assumptions. That there is an inconsistency in 7 being prime needs demonstrating, after all. If you redefine "prime" to exclude 7 then it is strictly correct and we don't have a disagreement, but we don't need a different logic for that. (And the standard definition of prime is more mathematically useful.)
Finally, the Foundationalist Skeptic would argue that they aren't using something to prove itself- they are starting from no assumptions whatsoever. I have concluded, as I mentioned, that there is a problem with their position, but not the one you claim.
comment by Creutzer · 2013-05-29T15:49:42.301Z · LW(p) · GW(p)
Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see in outline that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.
Invariantism, in my opinion, is rooted precisely in the failure to recognize that this is an empirical and ultimately linguistic question. I'm not sure how neuroscience would enter into it, actually. Once you recognize that it's an empirical issue, it becomes obvious that the usage of various epistemological terms - like that of most other terms - is highly context-dependent. (If you don't think this is obvious, have a look at experimental philosophy.) With that usage, you have an actual explanandum, and if you want a theory that derives the associated phenomena - well, do linguistics and cognitive psychology and stop calling it philosophy, because it isn't. (Of course, the problem is ridiculously hard, because nobody has a good model of how lexical meaning relates to or even depends on context, even though it obviously does.)
Note: The "you" in this comment are intended generically, not referring particularly to the OP or any reader.
↑ comment by Carinthium · 2013-05-30T04:14:08.092Z · LW(p) · GW(p)
At least some invariantists do tend to look up cognitive evidence, so your argument is not totally correct. You're probably right overall, but I'm still not sure- the Invariantist tends to argue using Warranted Assertability manoeuvres that distinguish being warranted in asserting X from believing X.
↑ comment by Creutzer · 2013-05-30T06:09:40.698Z · LW(p) · GW(p)
The most immediate problem for this approach is that it's not clear how it could work for embedded contexts.
The other is, of course, to spell out the context-independent meaning and explain precisely how pragmatics operates on it. It's also not clear that this notion of a strong semantics-pragmatics divide with independent and invariant semantic meanings is tenable in general.
comment by Viliam_Bur · 2013-05-29T10:30:01.279Z · LW(p) · GW(p)
Not a philosophy student, but it seems to me that your question is basically this:
If everything is uncertain (including reality, state of my brain, etc.), how can I become certain about anything?
And the answer is:
Taking your question literally, you can't.
In real life, we don't take it literally. We don't start by feeling uncertain about literally everything at the same time. We take some things for granted and most people don't examine them (which is functionally equivalent to having axioms); some people examine them step by step, but not all at the same time (which is functionally equivalent to circular reasoning).
Replies from: Carinthium↑ comment by Carinthium · 2013-05-29T12:44:12.765Z · LW(p) · GW(p)
Not quite: I had several questions, and you're somewhat misinterpreting the one you're discussing. I'll try to clarify it for you. There are two sides in the argument, the Foundationalists (mostly skeptics) and the Coherentists. So far I've been Foundationalist but not committed on skepticism. Logically, of course, there is no reason to assume that one or the other is the only possible position, but it makes a good heuristic for a quick summary of what's been covered so far.
-The Foundationalists in this particular argument are Strong Foundationalists (weak Foundationalism got thrown out at the beginning), who contend that you can only rationally believe something if you can justify it based on self-evident truths (in the sense that they must be true in any possible universe) or can infer it from such truths.
-The Coherentists in this particular argument contend basically that all beliefs are ultimately justified by reference to each other. This is circular, and yet justified.
-The Foundationalists have put the contention that probability is OFF THE TABLE. This is because, they claim, any concept of probability is either simply a subjective feeling (and thus invalid) or rests on the presumption that empirical evidence is valid (which they dispute). This gets back to their argument that it is IRRATIONAL to believe in the existence of the world.
-The Coherentists countered with the concept of "tenability": believing X provisionally, but being willing to discard it should new evidence come along.
-I have already, arguing close to the Foundationalist side, pointed out that the mere fact that humans DO reason in a certain way in practice gives no reason to believe it is a valid form of reasoning.
-Both sides have agreed that purely circular arguments are off the table. Hence, both the Foundationalists and the Coherentists have agreed not to use any reference to actual human behaviour to justify one theory over the other.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-29T14:50:09.784Z · LW(p) · GW(p)
Could you give me examples of "self-evident truths" other than mathematical equations or tautologies? To me it seems that if you are allowed to use only things that are true in all possible universes, you can only get to conclusions that are true in all possible universes. (In other words, there is no way I could ever believe "my name is Viliam" using only the Strong Foundationalist methods.)
Replies from: Carinthium↑ comment by Carinthium · 2013-05-30T01:12:26.676Z · LW(p) · GW(p)
Yes, the Foundationalist would agree with that. They would not see a problem with it: that is the legitimate limit of knowledge.
Replies from: FeepingCreature↑ comment by FeepingCreature · 2013-05-30T11:37:57.634Z · LW(p) · GW(p)
Well, in that case the intuitive answer would be that the Foundationalists have successfully argued themselves into a spectacularly convincing corner, and meanwhile I'll just be over here using all this "unverifiable" "knowledge" to figure out how to deal with the "real" "world".
And in any case, if you're invoking an Evil Demon you're lost regardless; it's the epistemological equivalent of "but what if all your arguments are actually wrong and you just can't see it", to which the answer would be "In that case I am quite hopelessly lost, but it doesn't look that way to me, and what more do you expect me to say?"
I suppose an argument could be made that "if such a thing as evolution exists, it seems implausible for it to create a brain that expends an awful lot of food intake on being irreparably wrong about the things it knows, and if not even evolution exists, our view of the cosmos is so lost as to be irreparable regardless".
Sometimes I wonder if philosophy should be taught in a largely noun-free environment. (Points for correct answers, points deducted for Noun Usage?) Get people's minds off the what, and on the how and why. Obsession with describing states will be the death of philosophy...
Replies from: Carinthium↑ comment by Carinthium · 2013-05-30T13:01:46.958Z · LW(p) · GW(p)
Firstly, you're getting mixed up: the Foundationalist side is trying to downplay the Evil Demon argument as much as possible, whilst the Coherentist side claims it refutes Foundationalism, since it would mean nothing can be known.
Both sides, plus myself, plus practically everybody else, agree that just because intuition states X doesn't mean X is true. So how can you invoke intuition with any plausibility in a debate?
IF evolution works as suspected, there are still ways humans could survive other than by their beliefs correlating with reality, depending on how everything else works.
comment by Ben Pace (Benito) · 2013-05-29T00:30:25.113Z · LW(p) · GW(p)
To combat skepticism, or at least solipsism, you just need to realise that there are no certainties, but that doesn't mean you know nothing. You can work probabilistically.
Consider:
http://lesswrong.com/lw/mn/absolute_authority/
http://lesswrong.com/lw/mo/infinite_certainty/
http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
http://wiki.lesswrong.com/wiki/Absolute_certainty
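The shared point of those posts can be stated compactly in log-odds form (a sketch supplied here for convenience, assuming the usual definition):

```latex
\[
\operatorname{logodds}(p) = \log\frac{p}{1-p},
\qquad
\operatorname{logodds}(p) \to +\infty \ \text{as}\ p \to 1,
\qquad
\operatorname{logodds}(p) \to -\infty \ \text{as}\ p \to 0.
\]
```

Since a Bayesian update adds only a finite log-likelihood ratio, probabilities of exactly 0 or 1 could never be reached by any finite amount of evidence, which is the sense in which they are "not probabilities".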
Replies from: Carinthium↑ comment by Carinthium · 2013-05-29T01:36:18.892Z · LW(p) · GW(p)
As of right now, the problem is with defending the concept of probability. The argument put forward is that:
-Either probability is a subjective feeling (and thus invalid) or it rests on empirical evidence. But empirical evidence is, firstly, the very thing being disputed; and secondly, if empirical evidence depends on the concept of probability and probability depends on the concept of empirical evidence, you have a direct circular argument.
The Coherentist side conceded the untenability of a direct circular argument and instead argued their knowledge was based not on probability as such but on tenability (I believe X until I see evidence or argument to discredit it). A strong argument, but it throws probability as such out the window.
comment by [deleted] · 2013-05-28T19:49:05.060Z · LW(p) · GW(p)
This book might be what you are looking for. It's Evidence and Inquiry by Susan Haack. I have it, but I've only done a few very cursory skims of it (ETA: It's on my summer reading list, though). It has two very positive reviews on Amazon. Also, she calls out the Gettier "paradoxes" for what they are (for the most part, pointless distractions).
Replies from: Carinthium↑ comment by Carinthium · 2013-05-29T01:05:56.507Z · LW(p) · GW(p)
Got a pretty long reading list right now. I'll go through it when I have time, though.
comment by Carinthium · 2013-10-01T05:08:34.403Z · LW(p) · GW(p)
I doubt people are actually still interested, but just in case: I've actually managed to solve this problem.
IF the Correspondence Theory of Truth is assumed (defining "Truth" as that which corresponds to reality) and the assumption is made that philosophy should pursue truth rather than what is pragmatically useful, then, for any non-Strong-Foundationalist method of determining truth, the objection can be made that it could easily have no correlation with reality, and there would be no way of knowing.
Probabilistic arguments fall apart because they would require accepting axioms of probability that cannot themselves be demonstrated. Although the axioms could be adopted as definitions, that would not demonstrate any usefulness in dealing with reality.
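For concreteness, the axioms in question are presumably Kolmogorov's; a standard statement, supplied here for reference:

```latex
\[
\begin{aligned}
&P(A) \ge 0 \quad \text{for every event } A,\\
&P(\Omega) = 1,\\
&P\Bigl(\textstyle\bigcup_i A_i\Bigr) = \sum_i P(A_i) \quad \text{for pairwise disjoint events } A_i.
\end{aligned}
\]
```

None of these is self-evident in the Strong Foundationalist's sense; adopting them as definitions is always possible, but that by itself says nothing about whether the defined quantity tracks reality.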
comment by Shmi (shminux) · 2013-05-30T03:53:43.841Z · LW(p) · GW(p)
Not trying to answer your questions, sorry. Just wanted to mention that different philosophical camps pattern-match to different denominations of the same religion. They keep arguing without any hope of agreeing. Occasionally some denominations prevail and others die out, or get reborn when a new convincing guru or a prophet shows up. If you have a strong affinity for theism, err, mainstream philosophy, just pick whichever denomination you feel like, or whichever gives you the best chance of advancement. If you care about that real world thing, consider deconversion.
P.S.
an Evil Demon could theoretically fool my reason into thinking that it had reasoned correctly when it hadn't
Theoretically??? The human mind is full of these; they are called cognitive biases.
Replies from: Carinthium↑ comment by Carinthium · 2013-05-30T13:07:10.503Z · LW(p) · GW(p)
Firstly, Eliezer is, amongst other things, a philosopher. Secondly, he does argue with others using arguments that are not purely empirical. If you're not a follower of his this is no criticism, but if you accept this argument you must reject philosophy.
Thirdly, that there are several warring camps with no agreement does not imply that none of them is right. It probably means that some people are being overly stubborn, but not that none are correct. In religion, the Atheists (who have taken a side in many religious debates) are, as it happens, right. There is a real problem with philosophy and rationality, but you don't have the solution.
The Evil Demon argument is the extreme case which fools people about EVERYTHING. My apologies if that was unclear.
comment by BerryPick6 · 2013-05-29T13:36:19.941Z · LW(p) · GW(p)
Here is one hand...
comment by falenas108 · 2013-05-28T17:10:08.409Z · LW(p) · GW(p)
(Then again, it has been argued, if a Coherentist were decieved by an evil demon they could be decieved into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard if they can't know if it coheres or not? The seems to cohere formulation has it's own problem)
Doesn't the Coherentist idea say that even if the knowledge is incorrect, it is still "true" for the observer, because it coheres with the rest of their beliefs?
The opinion Eliezer holds is essentially that yes, you can't know anything, but at some point you have to act, and acting as if you have knowledge leads to better outcomes. Yes, this ignores the problem of induction. The justification for this is that it works: even if it can't be proven, it gets results.
Also, advice for "losing": don't think of it as losing. Don't identify as a Foundationalist; identify as someone who's trying to find the truth, and to whom, given past beliefs, Foundationalism seemed like the most likely answer. Now you have evidence that this isn't the case, and should change your beliefs accordingly.
Replies from: Carinthium↑ comment by Carinthium · 2013-05-29T01:04:32.377Z · LW(p) · GW(p)
I like your advice about losing and will take it unless I find a brilliant Foundationalist argument pretty soon. As for the rest, though, ignoring the problem of induction means conceding that all action and belief is irrational. Unless the senses and memory can be considered trustworthy (which has not been demonstrated), it is irrational to use them as evidence for better outcomes.
Replies from: falenas108↑ comment by falenas108 · 2013-05-29T04:34:51.459Z · LW(p) · GW(p)
By irrational, do you mean philosophically or in real life? Because someone who acted like there was no knowledge would do pretty terribly in life, and I would not call that rational.
If you mean philosophically, then yes. I've never heard a good answer to the problem of induction that doesn't invoke God or isn't circular.
comment by [deleted] · 2013-05-28T14:06:40.185Z · LW(p) · GW(p)
You might make some progress by stating what philosophy is not, then by stating what philosophy can be but should not be.
For myself: philosophy is not explaining by not-explaining (example: "it was God!"). Philosophy can, but should not, refer only to itself.