Concepts Don't Work That Way

post by lukeprog · 2011-09-28T02:01:23.614Z · LW · GW · Legacy · 92 comments

Contents

  Conceptual Analysis
  Concepts in the Brain
  References

Part of the sequence: Rationality and Philosophy

Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.

These are three major findings of cognitive science. More than two millennia of a priori philosophical speculation about these aspects of reason are over. Because of these discoveries, philosophy can never be the same again.

When taken together and considered in detail, these three findings... are inconsistent with central parts of... analytic philosophy...

This book asks: What would happen if we started with these empirical discoveries about the nature of mind and constructed philosophy anew?

...A serious appreciation of cognitive science requires us to rethink philosophy from the beginning, in a way that would put it more in touch with the reality of how we think.

So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?

What emerges is a philosophy close to the bone. A philosophical perspective based on our empirical understanding of the embodiment of mind is a philosophy in the flesh, a philosophy that takes account of what we most basically are and can be.

Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously.


Conceptual Analysis

Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how misguided thousands of philosophical works can be due to an ignorance of cognitive science.

Consider what may be the central method of 20th century analytic philosophy: conceptual analysis. In its standard form, conceptual analysis assumes (Ramsey 1992) the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.” For example, the concept bachelor has the constituents unmarried and man. Something falls under the concept bachelor if and only if it is an unmarried man.
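To see how strict this is, here is a minimal sketch (a Python toy of my own, with an invented record format, not anything from the literature): under the classical view, membership is a bare conjunction of conditions, so there are no degrees of bachelorhood and no borderline bachelors.

    # Classical view: falling under a concept is a conjunction of
    # individually necessary and jointly sufficient conditions.
    def is_bachelor(person):
        return person["unmarried"] and person["male"]

    print(is_bachelor({"unmarried": True, "male": True}))   # True
    print(is_bachelor({"unmarried": False, "male": True}))  # False -- fail one condition, fail outright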

Conceptual analysis, then, is the attempt to examine our intuitive concepts and arrive at definitions (in terms of necessary and sufficient conditions) that capture the meaning of those concepts. De Paul & Ramsey (1999) explain:

Anyone familiar with Plato's dialogues knows how [conceptual analysis] is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion... the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor's definition of the concept by pointing out various counterexamples...

For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person.... [The] proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

The practice continues even today. Consider the conceptual analysis of knowledge. For centuries, knowledge was considered by most to be justified true belief (JTB). If Susan believed X but X wasn’t true, then Susan couldn’t be said to have knowledge of X. Likewise, if X was true but Susan didn’t believe X, then she didn’t have knowledge of X. And if Susan believed X and X was true but Susan had no justification for believing X, then she didn’t really have “knowledge,” she just had an accidentally true belief. But if Susan had justified true belief of X, then she did have knowledge of X.

And then Gettier (1963) offered some famous counterexamples to this analysis of knowledge. Here is a later counterexample, summarized by Zagzebski (1994):

...imagine that you are driving through a region in which, unknown to you, the inhabitants have erected three barn facades for each real barn in an effort to make themselves look more prosperous. Your eyesight is normal and reliable enough in ordinary circumstances to spot a barn from the road. But in this case the fake barns are indistinguishable from the real barns at such a distance. As you look at a real barn you form the belief 'That's a fine barn'. The belief is true and justified, but [intuitively, it isn’t knowledge].

As with most counterexamples to the JTB analysis of knowledge, this one arises due to “accidents” in the scenario:

It is only an accident that visual faculties normally reliable in this sort of situation are not reliable in this particular situation; and it is another accident that you happened to be looking at a real barn and hit on the truth anyway... the [counter-example] arises because an accident of bad luck is cancelled out by an accident of good luck.

A cottage industry sprang up around these “Gettier problems,” with philosophers proposing new sets of necessary and sufficient conditions for knowledge, and other philosophers raising counterexamples to them. Weatherson (2003) described this circus as “the analysis of knowledge merry-go-round.”

My purpose here is not to examine Gettier problems in particular, but merely to show that the construction of conceptual analyses in terms of necessary and sufficient conditions is mainstream philosophical practice, and has been for a long time.

Now, let me explain how cognitive science undermines this mainstream philosophical practice.


Concepts in the Brain

The problem is that the brain doesn’t store concepts in terms of necessary and sufficient conditions, so philosophers have been using their intuitions to search for something that isn’t there. No wonder philosophers have, for over a century, failed to produce a single, successful, non-trivial conceptual analysis (Fodor 1981; Mills 2008).

How do psychologists know the brain doesn’t work this way? Murphy (2002, p. 16) writes:

The groundbreaking work of Eleanor Rosch in the 1970s essentially killed the classical view, so that it is not now the theory of any actual [scientific] researcher...

But before we get to Rosch, let’s look at a different experiment:

McCloskey and Glucksberg (1978)... found that when people were asked to make repeated category judgments such as "Is an olive a fruit?" or "Is a dog an animal?" there was a subset of items that individual subjects changed their minds about. That is, if you said that an olive was a fruit on one day, two weeks later you might give the opposite answer. Naturally, subjects did not do this for cases like "Is a dog an animal?" or "Is a rose an animal?" But they did change their minds on borderline cases, such as olive-fruit, and curtains-furniture. In fact, for items that were intermediate between clear members and clear nonmembers, McCloskey and Glucksberg’s subjects changed their mind 22% of the time. This may be compared to inconsistent decisions of under 3% for the best examples and clear nonmembers... Thus, the changes in subjects’ decisions do not reflect an overall inconsistency or lack of attention, but a bona fide uncertainty about the borderline members. In short, many concepts are not clear-cut. There are some items that... seem to be "kind of" members. (Murphy 2002, p. 20)

Category-membership for concepts in the human brain is not a yes/no affair, as the “necessary and sufficient conditions” approach of the classical view assumes. Instead, category membership is fuzzy.
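This pattern is just what graded membership predicts. If each yes/no judgment is treated as a draw from a graded membership value p, two independent judgments disagree with probability 2p(1 - p): near zero for clear cases, large for borderline ones. A toy sketch in Python (my own illustration; the p values are invented, chosen only to reproduce the reported rates):

    # Toy model of repeated category judgments: each yes/no answer is a
    # weighted coin flip with graded membership value p, so two
    # independent judgments disagree with probability 2 * p * (1 - p).
    items = {
        "dog/animal (clear member)":     0.985,
        "rose/animal (clear nonmember)": 0.015,
        "olive/fruit (borderline)":      0.875,  # invented value
    }

    for item, p in items.items():
        print(f"{item}: P(inconsistent repeat) = {2 * p * (1 - p):.2f}")
    # Clear cases flip about 3% of the time; the borderline case flips
    # about 22% -- the same pattern as the data quoted above.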

Another problem for the classical view is raised by typicality effects:

Think of a fish, any fish. Did you think of something like a trout or a shark, or did you think of an eel or a flounder? Most people would admit to thinking of something like the first: a torpedo-shaped object with small fins, bilaterally symmetrical, which swims in the water by moving its tail from side to side. Eels are much longer, and they slither; flounders are also differently shaped, aren’t symmetrical, and move by waving their body in the vertical dimension. Although all of these things are technically fish, they do not all seem to be equally good examples of fish. The typical category members are the good examples — what you normally think of when you think of the category. The atypical objects are ones that are known to be members but that are unusual in some way... The classical view does not have any way of distinguishing typical and atypical category members. Since all the items in the category have met the definition’s criteria, all are category members.

...The simplest way to demonstrate this phenomenon is simply to ask people to rate items on how typical they think each item is of a category. So, you could give people a list of fish and ask them to rate how typical each one is of the category fish. Rosch (1975) did this task for 10 categories and looked to see how much subjects agreed with one another. She discovered that the reliability of typicality ratings was an extremely high .97 (where 1.0 would be perfect agreement)... In short, people agree that a trout is a typical fish and an eel is an atypical one. (Murphy 2002, p. 22)
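Prototype theories of concepts accommodate this naturally: typicality falls out of graded similarity to a prototype, while membership itself can stay fixed. Here is a minimal sketch (invented feature sets, not Rosch's actual materials or method):

    # Prototype-style typicality: score each exemplar by its overlap
    # with the category's prototypical features (features invented).
    fish_prototype = {"fins", "torpedo-shaped", "scales",
                      "swims-with-tail", "bilaterally-symmetrical"}

    exemplars = {
        "trout":    {"fins", "torpedo-shaped", "scales",
                     "swims-with-tail", "bilaterally-symmetrical"},
        "shark":    {"fins", "torpedo-shaped", "swims-with-tail",
                     "bilaterally-symmetrical"},
        "eel":      {"fins", "swims-with-tail"},
        "flounder": {"fins", "scales"},
    }

    for name, features in exemplars.items():
        typicality = len(features & fish_prototype) / len(fish_prototype)
        print(f"{name}: typicality = {typicality:.2f}")
    # All four are fish, but trout scores 1.00 while eel scores 0.40 --
    # graded typicality with no change in category membership.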

So people agree that some items are more typical category members than others, but do these typicality effects manifest in normal cognition and behavior?

Yes, they do.

Rips, Shoben, and Smith (1973) found that the ease with which people judged category membership depended on typicality. For example, people find it very easy to affirm that a robin is a bird but are much slower to affirm that a chicken (a less typical item) is a bird. This finding has also been found with visual stimuli: Identifying a picture of a chicken as a bird takes longer than identifying a pictured robin (Murphy and Brownell 1985; Smith, Balzano, and Walker 1978). The influence of typicality is not just in identifying items as category members — it also occurs with the production of items from a category. Battig and Montague (1969) performed a very large norming study in which subjects were given category names, like furniture or precious stone and had to produce examples of these categories. These data are still used today in choosing stimuli for experiments (though they are limited, as a number of common categories were not included). Mervis, Catlin and Rosch (1976) showed that the items that were most often produced in response to the category names were the ones rated as typical (by other subjects). In fact, the average correlation of typicality and production frequency across categories was .63, which is quite high given all the other variables that affect production.

When people learn artificial categories, they tend to learn the typical items before the atypical ones (Rosch, Simpson, and Miller 1976). Furthermore, learning is faster if subjects are taught on mostly typical items than if they are taught on atypical items (Mervis and Pani 1980; Posner and Keele 1968). Thus, typicality is not just a feeling that people have about some items (“trout good; eels bad”) — it is important to the initial learning of the category in a number of respects...

Learning is not the end of the influence, however. Typical items are more useful for inferences about category members. For example, imagine that you heard that eagles had caught some disease. How likely do you think it would be to spread to other birds? Now suppose that it turned out to be larks or robins who caught the disease. Rips (1975) found that people were more likely to infer that other birds would catch the disease when a typical bird, like robins, had it than when an atypical one, like eagles, had it... (Murphy 2002, p. 23)

(If you want further evidence of typicality effects on cognition, see Murphy [2002] and Hampton [2008].)

The classical view of concepts, with its binary category membership, cannot explain typicality effects.

So the classical view of concepts must be rejected, along with any version of conceptual analysis that depends upon it. (If you doubt that many philosophers have done work dependent on the classical view of concepts, see here).

To be fair, quite a few philosophers have now given up on the classical view of concepts and the “necessary and sufficient conditions” approach to conceptual analysis. And of course there are other reasons that seeking definitions stipulated as necessary and sufficient conditions can be useful. But I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice and require that we ask and answer philosophical questions differently.

Philosophy by humans must respect the cognitive science of how humans reason.


Next post: Living Metaphorically

Previous post: When Intuitions Are Useful


References

Battig & Montague (1969). Category norms for verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology Monograph, 80 (3, part 2).

De Paul & Ramsey (1999). Preface. In De Paul & Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield.

Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.

Fodor (1981). The present status of the innateness controversy. In Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press.

Hampton (2008). Concepts in human adults. In Mareschal, Quinn, & Lea (eds.), The Making of Human Concepts (pp. 295-313). Oxford University Press.

McCloskey and Glucksberg (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6: 462–472.

Mervis, Catlin & Rosch (1976). Relationships among goodness-of-example, category norms, and word frequency. Bulletin of the Psychonomic Society, 7: 283–284.

Mervis & Pani (1980). Acquisition of basic object categories. Cognitive Psychology, 12: 496–522.

Mills (2008). Are analytic philosophers shallow and stupid? The Journal of Philosophy, 105: 301-319.

Murphy (2002). The Big Book of Concepts. MIT Press.

Murphy & Brownell (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 70–84.

Posner & Keele (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77: 353–363.

Ramsey (1992). Prototypes and conceptual analysis. Topoi, 11: 59-70.

Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665–681.

Rips, Shoben, & Smith (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12: 1–20.

Rosch (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104: 192–233.

Rosch, Simpson, & Miller (1976). Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2: 491–502.

Smith, Balzano, & Walker (1978). Nominal, perceptual, and semantic codes in picture categorization. In Cotton & Klatzky (eds.), Semantic Factors in Cognition (pp. 137–168). Erlbaum.

Weatherson (2003). What good are counterexamples? Philosophical Studies, 115: 1-31.

Zagzebski (1994). The inescapability of Gettier problems. The Philosophical Quarterly, 44: 65-73.

Comments

comment by Scott Alexander (Yvain) · 2011-09-28T09:30:05.134Z · LW(p) · GW(p)

I always find it a red flag when it seems like an entire group of highly-educated people is doing something ridiculously stupid. If assuming the brain thinks in terms of necessary-and-sufficient would be really stupid, maybe that's not what conceptual analysts are doing.

The idea that our brain's fuzzy type-1 thinking can be translated into precise type-2 thinking is one of the foundations of science and mathematics, not to mention philosophy. I'd been drawing and seeing circles for years as a child before I learned that they were the 2-D set of points equidistant from a center point, but this latter definition accurately captures a necessary and sufficient condition for circles. Anyone who says "your brain doesn't really process circles based on that definition, it's just pattern-matching other circles you've seen" would be missing the point.

And this process sometimes works even with natural categories. Wikipedia defines "birds" as "feathered, winged, bipedal, endothermic (warm-blooded), egg-laying, vertebrate animals", and as far as I can tell, this is necessary and sufficient for birds (some sources say kiwis are wingless, but others say they have small, vestigial wings). Although birds are the classic example of a fuzzy mental category, like the "circle" category it turns out to have a pretty good necessary-and-sufficient definition after all. Likewise, "fish" are "all gill-bearing aquatic vertebrate animals that lack limbs with digits".

Even if the above turn out to be incorrect (we find an animal we intuitively classify as a fish that does not meet that definition), it's still interesting and potentially useful to have a prediction rule that works 99+% of the time. And someone who started out believing whales to be fish could, armed with such a 99%-rule, correct her error.

So even if our brains don't naturally think in terms of necessary-and-sufficient, it's not immediately obvious that it's stupid and impossible to try to come up with necessary-and-sufficient conditions for our categories.

There may be some sets of borders in thingspace which are better than others, in the same way that there are some borders for an independent Palestinian state that are better than others (even though we're not sure exactly where the border should be, sticking Tel Aviv in Palestine, or Ramallah in Israel, would be a mistake).

Fixing borders in thingspace can determine the status of edge cases. Sometimes this can even be useful; for example, if I am allergic to fish, then having a correct boundary for "fish" will let me know I can safely eat whale meat. I may not know exactly what chemical in fish causes my allergic reaction, or even know that allergy is an immune reaction to specific chemicals - but being able to draw the category boundaries accurately will "miraculously" predict that whale meat will not trigger my allergy.

Or more realistically, coming up with a set of criteria for "good" will help me determine that stoning homosexuals is bad, even if I previously didn't realize this. And philosophers have come up with some pretty good definitions for "good" that can do this - not universally accepted, by any means, but useful to those who know them.

I think you have a strong case against conceptual analysis, and some very intelligent commentary on where conceptual analysis does work - but it's all in Conceptual Analysis and Moral Theory. This post seems to be attacking a straw man by accusing conceptual analysts of necessarily trying to model the human brain and then doing a bad job of it.

Replies from: lukeprog
comment by lukeprog · 2011-09-28T19:27:22.625Z · LW(p) · GW(p)

So even if our brains don't naturally think in terms of necessary-and-sufficient, it's not immediately obvious that it's stupid and impossible to try to come up with necessary-and-sufficient conditions for our categories.

I haven't claimed this, and in fact have specifically denied it. But it is apparently a common reading of my post, so I've added a sentence toward the end to make this clear. Sorry about that.

maybe that's not what conceptual analysts are doing.

I think it is, in many cases. Maybe the clearest argument for this is from Ramsey (1992). I'll quote an extended passage below, though you may want to skip to the part that reads: "At first blush, it might seem a little odd to suppose that conceptual analysis involves any presuppositions about the way our minds work..."

[Discussions of the conflict between conceptual analysis and the psychology of concepts] have been floating around philosophical circles for some time. Perhaps the best known expression of these sentiments is Wittgenstein's discussion of family resemblance concepts in the Investigations, though similar ideas can be found in the writings of other philosophers, including Hilary Putnam (1962), Peter Achinstein (1968), Harold Brown (1988), Terence Horgan (1990), and in particular, Stephen Stich (1990, [1992])...

Conceptual analysis and its underlying assumptions

It would be a bit of an understatement to claim that conceptual analysis has been an important aspect of Western philosophy. Since the writings of Plato, in which Socrates and his cohorts repeatedly attempt to discern the true essence of matters such as piety and justice, philosophers have been in the business of proposing and (more typically) attacking definitions for a huge range of abstract notions. These include such concepts as knowledge, causation, rationality, action, belief, person, justification and morality (to name just a few)... But how does this enterprise get carried out and, perhaps more importantly, what are its underlying assumptions about the way we represent concepts?

Two criteria for definitions

Answering the first question -- i.e., how does conceptual analysis get done? -- is, at first glance, relatively easy: philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be. However, this characterization is really too vague to tell us anything useful. Perhaps a better way to gain insight into conceptual analysis is to consider what is normally expected of the definitions put forth. By looking at the criteria philosophers use for definitions, we can get a firmer grasp on what philosophers are up to and perhaps uncover some of the presuppositions lurking behind this enterprise.

Naturally, there are a number of different criteria commonly invoked by philosophers searching for definitions. Here, I'll focus upon only two... The first of these requirements is that the definitions be relatively straightforward and simple. Indeed, a popular syntactic form assumed for definitions is that of a small set of properties regarded as individually necessary and jointly sufficient for the concept in question. Hence, more often than not philosophical definitions take a syntactic form in which the notorious (at least among copy-editors) "iff" is followed by a short conjunction of properties. Thus, X is knowledge if and only if X is justified, true belief or X is acting freely if and only if X is doing what he or she wants. As with explanatory theories in science, a popular underlying assumption of conceptual analysis is that overly complex and unwieldy definitions are defective, or ad-hocish, even when no better definition is immediately available. If an analysis yields a definition that is highly disjunctive, heavily qualified or involves a number of conditions, a common sentiment is that the philosopher hasn't gotten it right yet. Accordingly, different analyses are typically regarded as competitors, and, for the most part, few people take seriously the idea that the correct analysis might be one involving a disjunctive combination of these alternate definitions. To borrow a technical phrase from Jerry Fodor, analyses of this complex sort are commonly regarded as "yucky". For many philosophers, a proposed definition should be short and simple.

A second criterion definitions are generally expected to meet is a concern not about their form, but their degree of robustness. If a definition is to count as a real definition, then it is generally assumed that it cannot admit of any intuitive counterexamples. Hence, as we all learned in introductory philosophy, the standard way to gun down a proposed analysis is to find either a noninstance of the concept that possesses the definitional properties in question -- thereby showing that the defining properties are insufficient to capture the concept -- or an instance of the concept that doesn't possess the definitional properties -- thereby showing the defining properties aren't necessary. If counterexamples of this sort can be found, then the proposed definition is typically regarded as inadequate...

Hence, definitions sought by philosophers engaged in conceptual analysis typically must pass at least two tests: they must be relatively simple -- generally a conjunction of individually necessary and jointly sufficient properties -- and they must not admit of any intuitive counterexamples. With this in mind, we can now turn to the question of psychological presuppositions.

Psychological presuppositions of conceptual analysis

At first blush, it might seem a little odd to suppose that conceptual analysis involves any presuppositions about the way our minds work. After all, if people are interested in defining notions like justice or causation, then it's justice or causation that they are concerned with -- not human psychology. Nonetheless, when we look more closely at the criteria for definitions I've just sketched, we can indeed find lurking in the background certain assumptions about human cognition. Perhaps the easiest way to see this is to consider the significant role intuitive categorization judgments play in this type of philosophy. Notice, for example, that for either type of counterexample to actually count as a counterexample, there are going to have to be fairly strong and widely shared intuitions that some particular thing or event either is or is not an instance of the concept in question. In other words, the process of appraising definitions requires comparing and contrasting the definitional set of properties with intuitively judged instances and non-instances of the target concept. Without these intuitive categorization judgments, conceptual analysis as a practice could never get off the ground.

Because of this important role of intuitive judgments, conceptual analysis can't avoid being committed to certain assumptions about the nature of our cognitive system. One such assumption is that there is considerable overlap in the sorts of intuitive categorization judgments that different people make. Without this consensus, an intuitive counterexample for one individual would fail to be an intuitive counterexample for another individual, and no single definition could be agreed upon. Moreover, given that definitions are expected to express simple conjunctions of essential properties and allow no intuitive counterexamples, there also appears to be the fairly strong presumption that our intuitive categorization judgments will coincide perfectly with the presence or absence of a small but specific set of properties. In other words, lurking in the background of this enterprise is the assumption that our intuitions will nicely converge upon a set whose members are all and only those things which possess some particular collection of features. Given that philosophers expect to find tidy conjunctive definitions, and given that they employ intuitions as their guide in this search, the presupposition seems to be that our intuitive categorization judgments will correspond precisely with simple clusters of properties.

Replies from: lukeprog
comment by lukeprog · 2011-09-28T21:20:38.316Z · LW(p) · GW(p)

BTW, Sandin (2006) makes the (correct) reply to Ramsey that seeking (stipulated) necessary-and-sufficient-conditions definitions for concepts can be useful even if Ramsey is right that the classical view of concepts is wrong:

Even if we were to accept that no such [intuitive] definition [of a concept] is to be found, the activity of searching for such definitions need not be pointless. It might well be that we gain something else from the search. Here is one obvious example: We gain definitions that are better than the one we had before.

Also, I admit there are philosophers who disagree with me about what philosophers have been doing all along. See, for example, Nimtz (2009):

First, there is no doubt that the conditions implicitly guiding the application of our terms typically aren't Socratic - i.e., they cannot well be captured by a tidy conjunction of individually necessary and jointly sufficient conditions. But nothing commits a Gricean [conceptual] analysis to Socratic analysanda. It aims for an illuminating general characterisation of a term's application conditions, however complex and untidy those might turn out to be. Arguing that conceptual analysis is an ill-fated enterprise since it seeks Socratic analysanda which aren't to be had, as Kornblith (2007, 41ff) and Ramsey (1998, 165) do, amounts to failing to engage with Gricean analysis in the first place.

However, even this statement admits that conceptual analysis grounded in the Socratic analysanda is doomed. There's been an awful lot of that since Socrates.

Moreover, while I agree that conceptual analysis seeking application conditions for our terms can succeed, this is not the most common notion of what a 'concept' is according to 20th century analytic philosophy. The standard notion of what a concept is - the thing being analyzed - is that it is a kind of mental representation. The problem, then, is that mental representations do not occur in neat bundles of necessary and sufficient conditions.

McBain (2008) recognizes that both sorts of conceptual analysis go on. He calls the 'seeking concepts out there' approach "robust conceptual analysis" and the 'seeking concepts in our head' approach "modest conceptual analysis."

He notes that a third form of conceptual analysis may be the dominant one today: "reflective equilibrium." That will be the topic of another post of mine.

comment by Mitchell_Porter · 2011-09-27T07:26:20.972Z · LW(p) · GW(p)

Why can't conceptual analysis be regarded as "Coherent Extrapolated Cognition"? Just because people are vague in their thinking doesn't mean that clarity is a vice.

ETA: I'm going to try to stay away from LW for at least a month, in the hope that this sequence will be finished by the time I revisit. I know I'm going to fundamentally disagree with a lot of it, but better to wait until it's done rather than quarrel with it piecemeal.

Replies from: roystgnr, Manfred, Vaniver, lukeprog, fortyeridania
comment by roystgnr · 2011-09-28T16:51:43.157Z · LW(p) · GW(p)

I'd much rather see you quarrel with things piecemeal. "This long chain of logic is wrong" is much less satisfying to me than "This step here from lemma 4 to theorem 5 is wrong". The former may make for a better-sounding essay, but it's also harder to distinguish from rationalization and harder for readers to verify.

Also, why think of it as a "quarrel" at all? If lukeprog is making mistakes that are incidental to his main theses, then convincing him of that as soon as possible will give him more time to revise and improve his work. If he's making mistakes that are integral to his main theses, then convincing him of that as soon as possible will avoid wasted time finishing a red-herring sequence. And even if he's not really making mistakes at all, then letting him know what apparent-mistakes are being perceived will help him improve the clarity of his work. You don't seem to have difficulty expressing criticism in a non-antagonistic way, and polite intelligent criticism is a positive thing, even for the (epistemically rational) person whose ideas are being criticized.

comment by Manfred · 2011-09-27T18:11:04.505Z · LW(p) · GW(p)

Why can't conceptual analysis be regarded as "Coherent Extrapolated Cognition"? Just because people are vague in their thinking doesn't mean that clarity is a vice.

Because if you take a bunch of human brains and average their conception of "justice," you will get something that never in a million years would have been produced by conceptual analysis. It will have parameters and weighting and nonlinearity and no sign of "necessary and sufficient."

comment by Vaniver · 2011-09-27T13:16:33.561Z · LW(p) · GW(p)

Why can't conceptual analysis be regarded as "Coherent Extrapolated Cognition"? Just because people are vague in their thinking doesn't mean that clarity is a vice.

This comparison makes me more pessimistic about CEV.

Replies from: Solvent
comment by Solvent · 2011-09-28T08:51:03.872Z · LW(p) · GW(p)

Why?

Replies from: Vaniver
comment by Vaniver · 2011-09-28T13:49:50.436Z · LW(p) · GW(p)

Because if CEV is the metaethical analog of conceptual analysis, then it seems more likely to me to be mistaken. That may not be the intended analogy.

comment by lukeprog · 2011-09-27T08:04:48.921Z · LW(p) · GW(p)

A sequence done in one month? Clearly, you haven't been paying attention to my other sequences. :)

Why can't conceptual analysis be regarded as "Coherent Extrapolated Cognition"?

Coherent extrapolated cognition? Sounds like the process of reflective equilibrium, another standard tool of philosophy. I'll address that in future posts.

Just because people are vague in their thinking doesn't mean that clarity is a vice.

Certainly not!

comment by fortyeridania · 2011-09-27T14:50:37.845Z · LW(p) · GW(p)

You're going to skip the whole sequence? As long as your quarrels relate to the substance, why not share? If there are problems with the material presented, I'm sure the rest of us would rather hear them than assume the material is problem-free.

Or do you mean you'll give us a rebuttal when the sequence is done?

comment by kilobug · 2011-09-27T16:32:12.921Z · LW(p) · GW(p)

Yes, nothing much new for LW readers (since it's mostly covered by the "human guide to words" sequence), but still an important point to rehash, and to get people to read even if they are scared by the sequences. It's so painful to argue with someone who thinks words are precisely defined as Aristotelian classes, and says things like "I've nothing against gay couples, but gay marriage is just impossible by definition". And yet when asked "what is a mother?" they'll answer "someone who gave birth to a child", and when asked "what about adoption?" they'll revise their definition... Or when asked "what is a bird?" they'll answer "something with feathers that flies", and when pointed to penguins, they'll revise their definition...

Lots of pointless arguments would be saved if more people were aware that words are fuzzy boundaries, not precise definitions (unless you're working in a very formal science, like maths).

comment by Jayson_Virissimo · 2011-09-28T10:06:55.640Z · LW(p) · GW(p)

“I think good philosophy basically just is cognitive science, plus math.”

What is mathematics, but the purest form of conceptual analysis?

Replies from: dxu
comment by dxu · 2015-04-17T20:59:36.470Z · LW(p) · GW(p)

Math is rigorous. Most conceptual analysis is not.

EDIT: Also of note is the fact that mathematics has its own special language that is completely formalized. Philosophers, on the other hand, insist on using natural language to perform conceptual analysis, which has a number of problems.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2015-04-18T03:12:33.467Z · LW(p) · GW(p)

Are you claiming that math is not a kind of conceptual analysis or just that the conceptual analysis in math is more rigorous (less likely to produce bad results because it relies less on natural language)?

Replies from: dxu
comment by dxu · 2015-04-18T03:18:08.453Z · LW(p) · GW(p)

The second, as well as observing that as far as the task of describing what philosophers actually do is concerned, labeling math as a member of the category "conceptual analysis" isn't very helpful.

comment by Protagoras · 2011-09-27T11:01:24.176Z · LW(p) · GW(p)

Though philosophers have certainly not always been clear about what they are doing, much of the time they are probably better described as trying to find better concepts (better in respects including being clearer and more sharply defined) rather than trying to figure out what concepts we currently have. This is certainly true of Plato; the counter-examples in Plato aren't meant to show that, for example, Cephalus isn't accurately describing his own concept of justice, they're meant to show that the concept of justice Cephalus has is problematic and better concepts are needed. This is somewhat obscured by the fact that in the dialogues Socrates tries to show that people's concepts are problematic by their own standards, and more obscured by the rhetoric (including the doctrine of recollection) which phrases that in terms of somehow already possessing the better concepts without fully realizing it, but despite the rhetoric, Plato is clearly a conceptual reformer.

I take it that Mitchell Porter and Carinthium are making similar points, if I understand them correctly.

Replies from: lukeprog
comment by lukeprog · 2011-09-27T15:47:10.120Z · LW(p) · GW(p)

As I said in my response to Mitchell Porter, I'll get to reflective equilibrium later. This here is a post about the very common philosophical practice of looking for what our current concepts are. There are of course many who look for "reforming accounts" of our concepts, e.g. Brandt.

comment by Bum · 2011-09-28T02:49:06.205Z · LW(p) · GW(p)

I think we can exaggerate the impact of this sort of cognitive science on philosophy. It's very important IF we start from the assumption, as most philosophy has since the 17th century, that we won't figure anything out until we can figure out how the mind thinks and what sorts of things it can think about. That is certainly one way to do philosophy, and still an important branch of philosophy today, but by no means is it any longer considered to be First Philosophy. For example, it's hard to see how much of Lakoff's work will be relevant to contemporary metaphysics. Understanding the mind's classificatory mechanisms does little to help us understand the nature of necessity and possibility, of time and space. To some degree, absolutely, it's not irrelevant. For example, some of Lakoff's work on how spatial information is encoded in metaphor is important for understanding how we conceive of space, and whether a certain conception is an illusion of thought or revelatory of the nature of space. But no very interesting or central question in the metaphysics of space has been or likely will be solved that way - yet that is as much real philosophy as the sort discussed in this post.

comment by CG_Morton · 2011-09-27T13:48:01.366Z · LW(p) · GW(p)

I think this is a little unfair. For example, I know exactly what the category 'fish' contains. It contains eels and it contains flounders, without question. If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.

We pattern-match on 'has fins', 'moves via tail', etc. because we can do that fast, and because animals with those traits are likely to share other traits like 'is bilaterally symmetrical' (and perhaps 'disease is more likely to be communicable from similarly shaped creatures'). But that doesn't mean the hard-and-fast 'fish' category is meaningless; there is a reason dolphins aren't fish.

Replies from: roystgnr, lukeprog, scav, jmmcd
comment by roystgnr · 2011-09-28T17:04:27.235Z · LW(p) · GW(p)

If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.

I'm guessing you'd quickly say "yes" for Panderichthys and "no" for Acanthostega... but what about Tiktaalik? Or if that's too easy to answer (which answer?), pick any clear amphibian and start looking at its ancestors. Is there a clear line where "this is not a fish, but its mother is"?

We think of ring species as rare populations with interesting spatial distributions, but thanks to common descent every living thing is part of one big multi-ring species with a very interesting space-time distribution. It's hard to categorize living things, in part because the obvious ideas for equivalence relations turned out to not be inherently transitive.

comment by lukeprog · 2011-09-27T15:50:18.009Z · LW(p) · GW(p)

If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.

Are you talking about the biologist's stipulated definition of "fish"? This is different than one's intuitive concept.

Replies from: CG_Morton
comment by CG_Morton · 2011-09-27T16:58:53.249Z · LW(p) · GW(p)

I see what you're getting at with the intuitive concept (and philosophy matching how people actually are, rather than how they should be), but human imperfection seems to open the door to a whole lot of misunderstanding. Like, if someone said we were having fish for dinner, and then served duck, because they thought anything that swims is a fish, well I'd be put out to say the least.

I think my intuition is that my understanding of various concepts should approach the strictness of conceptual analysis. But maybe that's just vanity. After all, border cases can easily be specified (if we're having eel, just say 'eel' rather than 'fish').

Replies from: lukeprog
comment by lukeprog · 2011-09-27T17:04:07.688Z · LW(p) · GW(p)

I think my intuition is that my understanding of various concepts should approach the strictness of conceptual analysis.

Sure. But that normative claim is different than the descriptive claim I made about concepts.

Replies from: Morendil
comment by Morendil · 2011-09-28T06:46:27.981Z · LW(p) · GW(p)

But which philosopher(s) are you claiming made the opposite, false descriptive claim, namely that our brains represent them that way?

Replies from: lukeprog
comment by lukeprog · 2011-09-28T20:05:26.155Z · LW(p) · GW(p)

Tons of 'em. See here.

Replies from: Morendil
comment by Morendil · 2011-09-28T20:27:30.419Z · LW(p) · GW(p)

I looked where you asked me to look, and I still don't know who you're referring to. Could be me being stupid, but if so that's not my intention.

Would you give me the name of one philosopher who says that concepts are encoded in the human brain in terms of necessary and sufficient conditions?

Replies from: lukeprog
comment by lukeprog · 2011-09-28T20:38:51.451Z · LW(p) · GW(p)

The link I sent you to contains the argument for why it is that many common forms of conceptual analysis are committed to the view that concepts are encoded in the human brain in terms of necessary and sufficient conditions.

Replies from: Morendil
comment by Morendil · 2011-09-28T21:42:22.768Z · LW(p) · GW(p)

That doesn't answer my question. I don't want to rush to cry "logical rudeness objection" here; I suspect I'm asking the wrong question anyway. (But I do feel a bit frustrated.)

The "short-short" version of your post reads: the human mind doesn't encode concepts in terms of necessary and sufficient conditions, we now know; a majority of conceptual analysis treats concepts in terms of necessary and sufficient conditions; therefore conceptual analysis should be considered suspect, to the extent that it relies on untrue facts about the mind. (Did I get that mostly right?)

But "minds encode concepts in terms of necessary and sufficient conditions" and "minds can work with the framework of necessary and sufficient conditions" are two distinct factual claims about minds.

The problem becomes clear when you replace "conceptual analysis" with "math". Mathematicians do not rely, to do their jobs, on factual truths about how the mind represents mathematical concepts. They rely on the pragmatic usefulness of the framework of necessary and sufficient conditions.

Why does that argument work for philosophers but fail for mathematicians? (I have an inkling, which is that they don't look at the same kinds of concepts. But your post makes a muddle of that point, IMO.)

Replies from: lukeprog
comment by lukeprog · 2011-09-28T23:03:51.041Z · LW(p) · GW(p)

Oh, this is totally different than the objection I thought you were making. So thanks for clarifying.

Okay, so:

"minds encode concepts in terms of necessary and sufficient conditions" and "minds can work with the framework of necessary and sufficient conditions" are two distinct factual claims about minds.

Agreed. Like I say:

It is useful in many cases to talk about concepts in terms of necessary and sufficient conditions. I use stipulative definitions like this all the time.

My argument is not against the entire practice of seeking definitions in terms of necessary and sufficient conditions. My argument is against the practice of doing so under the assumption that what we're getting at with such definitions are the concepts in our head, rather than more stipulatively defined concepts that, in many cases, may be more useful than our intuitive concepts anyway.

Replies from: Yossarian
comment by Yossarian · 2011-09-29T05:30:34.470Z · LW(p) · GW(p)

"It's the map and not the territory," right?

I may be way off base here, but isn't the root of this disagreement that lukeprog is saying that our mental map called "conceptual analysis" doesn't perfectly reflect the territory of the real world and should therefore not be the official model, while Morendil is saying, "but it's good enough in most cases to get through most practical situations"? Which lukeprog agrees with.

Is that right?

comment by scav · 2011-09-28T14:01:52.237Z · LW(p) · GW(p)

there is a reason dolphins aren't fish

It may not be a very good reason. To quote Wikipedia:

Because the term "fish" is defined negatively, and excludes the tetrapods (i.e., the amphibians, reptiles, birds and mammals) which descend from within the same ancestry, it is paraphyletic, and is not considered a proper grouping in systematic biology. The traditional term pisces (also ichthyes) is considered a typological, but not a phylogenetic classification.

In other words, there are probably fish that are more distantly related to each other than one of them is to a dolphin (or you).

comment by jmmcd · 2011-09-27T14:37:41.203Z · LW(p) · GW(p)

Good point. The initial experiment couldn't even have been carried out without the biological definition of the fish category. If I'm asked to rate various fish as more or less typical, on a scale of 1 to 10, then I'll give very different answers depending on whether 1 means "least typical of all biologically-defined fish" or "mammal" or "flower" or "pair of headphones".

comment by Ronny Fernandez (ronny-fernandez) · 2011-10-22T16:02:06.169Z · LW(p) · GW(p)

[Sorry about the length; my brain didn't want to stop. I'll break it up into a couple comments if need be. ]

What if I interpret the above to show that philosophers should not do psychology? Certainly, figuring out the best way to reason has been at least as important in philosophy as figuring out how we actually reason.

Sometimes philosophers screw it up and confuse a normative claim for a descriptive claim. Perhaps (and I am not committed to this as anything more than a possibility) classical Aristotelian categories are not the way we actually represent categories when we're being lazy or care-free, but when we are trying to reason with the highest certainty possible, Aristotelian categories work best.

Is there no natural human category which is truly binary, has necessary and sufficient membership conditions, and is semantic? 'Electron' or 'quark' seem plausibly Aristotelian to my intuition, and are certainly semantic.

On the other hand, we may sometimes run into a target of inquiry which requires that we form concepts for its study, but there are no Aristotelian categories which do the job, due to the nature of the target. In this case, we should definitely use fuzzy logics and the like; but I wouldn't doubt that the closer your fuzzy sets approximate Aristotelian categories, the easier categorical inference becomes.

So, if you are simply claiming that brains do not always use Aristotelian categories, I agree, and think you have provided sufficient evidence for the claim. But this is not so much a hit to philosophers doing logic/philosophy coming from discoveries in cog-sci as it is a hit to philosophers doing cog-sci coming from cog-sci. However, if you would go on to say that we shouldn't treat categories as Aristotelian-ish as possible in philosophy/logic, I would say that you have not done enough to show this (not to say that LW doesn't elsewhere).

That philosophers arguing about how the brain works (or any other question about what is going on out there) is fruitless, is not something I needed cog-sci to tell me. Philosophers have been warning against philosophizing when we should just go out there and look, for centuries. Those philosophers that involved themselves in classical conceptual analysis in the way you described, did not fail in that they failed to look at cog-sci; they failed in that they couldn't tell that that was not philosophy time, it was go check the world time; this is a much more general and fundamental mistake class than failing to update on a new piece of evidence.

I like philosophy as loosely cog-sci + maths (at least the epistemological/ontological/logic-ish parts). But I would prefer to add to it the normative part of philosophy. Not how do we reason most often (a question at least as psychological as it is philosophical, if not more), but also what is the best, least error prone, fastest, way to reason? (Psychology tells us how we reason/think, but it doesn't know a damn thing about how we should reason/think.) This is not an unassailable field. We could use the tools we develop in Bayes and Calculus to do much of this work deductively. Modern Bayesian epistemology is already moving in this direction. I might also add to philosophy the contrasting and bridging of how we should reason to how we actually do. Of course, this is not to say that that is all philosophy is. Many philosophers have also been interested in forming descriptions of the way things are put together out there, in the most general sense of the terms, e.g., the field of metaphysics. But anyone who is working in ontology, epistemology, and logic, on both the normative and descriptive parts is likely a philosopher.

So, to summarize:

- How we reason/think is the domain of psychology / cognitive philosophy / philosophy of science.
- How we should reason/think is the domain of logic / epistemology / ontology / philosophy of science.
- Getting from how we do reason/think to how we should reason/think is the domain of general philosophy / rationalism.
- Formulating as complete as possible a cohesive, qualitative, general, and abstract description of reality, using as diverse a range of information as is available, is the domain of metaphysics.

I do admit however that of all of these fields which I think of as essentially intertwined, Psychology is the only one which appears on the list which is not traditionally thought of as philosophical. This motivates me to further investigate philosophy's relationship to psychology.

I am a philosophy major, and I think of myself as a philosopher. But maybe we should just say that philosophy is a dead field, and "philosophy" a hopelessly vague and outdated 2000+ year old term, but that many of its tenets, open questions, and methods survive (normally in some improved form) in the modern field of LW-style rationality.

And BTW: I don't see why you can't just say that an item which satisfies some but not all of the necessary conditions for membership in C, is C-ish; the more it has, the C-isher it is; if it has all of them, it is as C-ish as possible, and if it has none it is completely non-C-ish. This seems to capture both the typicality and Aristotelian categorical views as one hypothesis containing both types of categories and inference. Is there anything in cognitive science to suggest that the brain's categories don't function like that? The term you use to refer to some set of necessary conditions need not be the term you use to refer to that set of conditions next week; as long as any category you use consistently follows the rules described above, for as long as it remains consistent. I'm sure someone else has proposed a similar enough idea, anyone know its name?
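(Concretely, here is a minimal sketch of the rule I have in mind, with invented bird conditions:)

    # C-ishness as the fraction of C's necessary conditions satisfied:
    # 1.0 is full (Aristotelian) membership, 0.0 is complete non-membership.
    def c_ishness(item, conditions):
        return sum(1 for cond in conditions if cond(item)) / len(conditions)

    bird_conditions = [
        lambda x: x.get("feathers", False),
        lambda x: x.get("wings", False),
        lambda x: x.get("flies", False),
        lambda x: x.get("lays_eggs", False),
    ]

    robin   = {"feathers": True, "wings": True, "flies": True,  "lays_eggs": True}
    penguin = {"feathers": True, "wings": True, "flies": False, "lays_eggs": True}

    print(c_ishness(robin, bird_conditions))    # 1.0  -- as bird-ish as possible
    print(c_ishness(penguin, bird_conditions))  # 0.75 -- still quite bird-ish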

comment by TheOtherDave · 2011-09-28T18:34:29.469Z · LW(p) · GW(p)

Before going too far down this road, I'd like some attention given to the notion of approximation.

For example, consider two theories of category formation: CFT1: categories have necessary and sufficient conditions for membership, and to answer "Is X a Y?" we evaluate the truth-value of the conjunction of Y.conditions as applied to X.
CFT2: categories have prototypical members, and to answer "Is X a Y?" we evaluate the similarity of X to Y.prototype.

It's pretty easy to show, using more or less the arguments you present here, that CFT2 is a much better approximation of real human categorization behavior than CFT1.

But it's also clear that CFT2 is nevertheless an approximation, not a full explanation. For example, it's pretty easy to show that human similarity judgments aren't symmetrical -- the classic though somewhat outdated example is "Cuba is more like Red China than Red China is like Cuba" -- which means that "the similarity of X to Y.prototype" is ill-defined: do we mean S(X, Y.prototype) or S(Y.prototype, X) or something else?

But before we discard CFT2 for that reason, we should ask what we're trying to accomplish. Approximate solutions are useful in real-world situations, as long as we keep in mind that they are approximate. Prototype theory is adequate for a wide range of tasks. So we might want to hold onto CFT2 even knowing it's wrong.

Similarly, if we want to discard CFT1, it's not enough to show that it's wrong. We should also have some confidence that it's useless.
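Here is a minimal sketch of the contrast (invented features; the similarity function is in the style of Tversky's contrast model, which is one standard way to get the asymmetry mentioned above):

    # CFT1: membership is a strict conjunction of conditions -- binary.
    def cft1_member(x, conditions):
        return all(cond(x) for cond in conditions)

    # CFT2 needs a similarity measure. A Tversky-style contrast model
    # weights the two sets of distinctive features unequally, so that
    # similarity(a, b) != similarity(b, a).
    def similarity(a, b, alpha=0.8, beta=0.2):
        return len(a & b) - alpha * len(a - b) - beta * len(b - a)

    bird_def = [lambda x: "feathers" in x, lambda x: "wings" in x]
    print(cft1_member({"feathers", "wings", "beak"}, bird_def))  # True -- no degrees

    cuba  = {"communist", "small", "island", "caribbean"}
    china = {"communist", "large", "asian", "ancient", "populous", "nuclear-power"}

    print(similarity(cuba, china))  # -2.4: Cuba compared to Red China
    print(similarity(china, cuba))  # -3.6: Red China compared to Cuba
    # The item with fewer distinctive features is judged more similar
    # to the richer one than vice versa -- an asymmetry CFT1 cannot
    # even express, and that a naive reading of CFT2 leaves ill-defined.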

comment by wedrifid · 2011-09-27T23:42:50.585Z · LW(p) · GW(p)

Philosophy by humans must respect the cognitive science of how humans reason.

But it need not and should not limit itself to muddled human thinking. Because we learned math.

This post (and the trend of lukeprog's posts) seem excessively focused on redefining philosophy to be default_human_thought++. Which basically makes a mockery of the whole "concepts already have their own fuzzy meaning and trying to redefine them arbitrarily is bullshit" idea.

You can make philosophy take into account cognitive science and the average thinking habits of that one particular species, as well as allow for other less arbitrary kinds of thought. But luke's posts so far have not done that. I consider them interesting but not quite the philosophy or meta-ethics that they present themselves as.

comment by DuncanS · 2011-09-27T23:27:31.498Z · LW(p) · GW(p)

I do think that discovering that virtually all human concepts don't have Cartesian definitions is a valuable step. I also can't think of a good way of discovering it other than what was actually done - lots of people try and try, and fail.

Along the way there were some successes too - maths turned out to work OK, and ideas like gravity and so on. The ones that did have Cartesian definitions were so useful that we don't regard them as philosophy any more, which is a bit unfair. Philosophy gets to be the diseased bit - the bit that got left behind because nobody could figure it out.

The whole point of the search for Cartesian concepts is that you can reason with such definitions. They are the useful ones. The rest - well, it's a swamp, logically speaking. It's worth trying to discover which ideas fit in which category.

Replies from: DSimon
comment by DSimon · 2011-09-28T22:51:06.035Z · LW(p) · GW(p)

The whole point of the search for cartesian concepts is that you can reason with such definitions. They are the useful ones. The rest - well, it's a swamp, logically speaking.

You're mostly limited to such definitions when it comes to deductive reasoning; inductive and heuristic reasoning can work well with concepts that have sharp edges or lots of fuzziness.

comment by MarkusRamikin · 2011-09-27T08:55:17.734Z · LW(p) · GW(p)

I would never have thought an eagle to be atypical as an example of a bird; am I in a minority about this?

Replies from: demented
comment by demented · 2011-09-27T10:54:09.329Z · LW(p) · GW(p)

I feel the same way. In my case, I believe it's because I've been more exposed to eagles and chickens (in my country) than robins (What are those?), so the former fall within my definition of a typical bird.

Replies from: thomblake
comment by thomblake · 2011-09-27T14:29:59.607Z · LW(p) · GW(p)

robins (What are those?)

A gray bird with a red breast, referring to different species in different countries.

comment by christina · 2011-09-27T07:29:07.830Z · LW(p) · GW(p)

Upvoted for being clear, concise, and easy to read.

comment by cogitoprime · 2021-02-15T00:27:23.093Z · LW(p) · GW(p)

I found myself reading this book today and thought I remembered someone on Less Wrong posting about it. So here I am.

I think your critique misses some genuinely valid criticisms that Lakoff makes of the entire rationalist project.

The sections on Quine, and on Kahneman and Tversky (around p. 471), and around pages 15 and 105, are particularly good.

What your critique misses is that when you use the lens of cognitive science to critique Lakoff's philosophy, the body of work you are drawing on is already saturated with and informed by the assumptions you are critiquing. In Lakoffian terms, when we look at a brain scan and correlate activity in it to the words someone says, or to their selection of a set of options from a survey, the metaphor that informs and provides the interpretation for those words is already implicit in the words they say or the question asked of them. If I am already operating out of the "brain is like a computer" or "the I is the thinking I" (Aristotelian, Cartesian, etc.) metaphors, then that is what I will interpret to be present in the brain.

And if I am already operating in a certain metaphor, and so are the subjects in my psychology study, then that is the metaphor that will guide my questions and their answers.

Cognitive science did not spring out of a vacuum of objective anything. It arose out of people and scientists saturated with western, Greek, post-Enlightenment philosophy. So what they found inevitably confirmed what they were looking for. Imagine, if you will, giving a tribe of Papua New Guineans, Indians, or any other culture influenced by other philosophies the same tools of surveys, controlled experiments, and biological neuroscience. Would they have begun by asking questions that would potentially confirm the karmic nature of the mind? The God-worshipping nature of the mind? Or perhaps something even more alien to us? And would they not find what they were looking for? And would they not take their findings as confirming evidence of their already present metaphors?

If the replication crisis has proved anything, it's that cognitive science is NOT an objective activity that we can conduct devoid of bias or prior interpretation.

Yes, psychology and neuroscience have something to say about philosophy, but you are merely using the assumptions of one metaphor, interpreting the world through its lens, then using that lens to critique the idea of any lens other than the one you're using. Your conclusion is inherent in your premises, and you are merely confirming your biases.

comment by antigonus · 2011-09-28T08:21:19.663Z · LW(p) · GW(p)

Your crucial, unstated premise is that concepts with fuzzy application conditions can't or usually don't pick out determinate qualities or relations in the world. Because if they actually can pick out such qualities, then those qualities may turn out to be analyzable in terms of others, and conceptual analysts can just take themselves to be analyzing the semantic reference of our concepts rather than the confused jumble of neural events in which those concepts are actually stored.

Furthermore, that premise seems highly non-obvious to me. It impinges upon a ton of different questions. And it falls under the domain of philosophy of language, not cognitive science. So I think your claim that good philosophy just is cognitive science is clearly false.

Regarding your claim that philosophy hasn't produced any non-trivial conceptual reductions, that's a pretty controversial view. In particular, I think there are highly successful reductions of the concept of truth - see the SEP article on the deflationary theory of truth. And it's a lot harder to understand the concept of truth in terms of fuzzy pattern-matching than, say, the concept of socks.

comment by magfrump · 2011-09-27T19:48:12.970Z · LW(p) · GW(p)

I very much think that framing the idea of "cognitive science plus math" as "the embodied mind" and "a challenge to classical western thought" is a good way to attract strong thinkers from contrarian fields like feminist studies -- thinkers who have some important insights but are sharply divided from mainstream philosophy's analytic methods, and who may not have a concrete framework for getting back to practical problems.

comment by Morendil · 2011-09-27T15:29:23.633Z · LW(p) · GW(p)

I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice.

I'm not convinced you have done that; consider:

The problem is that the brain doesn’t process music in terms of musical instruments, but in terms of acoustic spectra, so musicians have been using their intuitions to search for something that isn’t there.

There is a disconnect there. If your "true rejection" of conceptual analysis is only based on implementation-level details of how concepts are stored in the brain, then ISTM that you will reject many other manners of discourse based on that criterion, that you'd otherwise accept (this being the first example that came to mind). Worse, maybe you make an argument that works now but will no longer serve later, if and when our squishy brains are replaced (or extended) by something which does store concepts definitionally (and that may sometimes make sense).

However, your article makes clear (and we already know) that there are sufficient reasons other than "how concepts are stored in the brain" to reject conceptual analysis. Namely, that it leads down blind alleys and fails to clarify what it was supposed to. Arguing about definition typically turns out to be a fruitless endeavor, irrespective of our neural architecture.

The classical view of concepts, with its binary category membership, cannot explain typicality effects.

That's something of a straw man argument; as far as I can see you haven't really tried to repair the classical view of concepts in such a way that it can explain typicality effects. For instance, you haven't considered things like overloading - using the same word to refer to several concepts, which could explain the "fruit" results.

Replies from: lukeprog
comment by lukeprog · 2011-09-27T15:45:26.581Z · LW(p) · GW(p)

Remember that I don't reject conceptual analysis in general, but only the flavor of analysis that assumes the classical view of concepts. I do this for the simple reason that the classical view of concepts is false. This is really not controversial.

as far as I can see you haven't really tried to repair the classical view of concepts in such a way that it can explain typicality effects

Not in this post, which is already too long. If you want a discussion of attempts to fix the classical view, see the source I pointed to: Murphy (2002).

Worse, maybe you make an argument that works now but will no longer serve later, if and when our squishy brains are replaced (or extended) by something which does store concepts definitionally (and that may sometimes make sense).

That would be fine. Then that would be philosophy for machines or philosophy for post-humans or something like that, not philosophy for humans.

I don't think I understand what you're saying with the musical instruments example.

Replies from: Morendil
comment by Morendil · 2011-09-27T16:40:42.935Z · LW(p) · GW(p)

the classical view of concepts is false

If it is, it doesn't seem to me to be false by virtue of not corresponding to how the brain stores concepts. The notion of knowledge as justified true belief can be rejected by appealing to examples that (as far as I can tell) you could come up with even in ignorance of modern cognitive science in general, and typicality effects in particular.

You can reject the proposition "concepts are represented in the human brain in terms of necessary and sufficient conditions" without necessarily rejecting the proposition "it is useful to talk about concepts in terms of necessary and sufficient conditions". You've shown convincing reasons to reject the first proposition, and you've shown some counterexamples to the second proposition, but you also seem to be implying that if we accept the first then the second never holds, and that can't be right.

In fact, it is useful in some cases to talk about concepts in terms of necessary and sufficient conditions (for instance mathematical concepts tend to have this structure).

The examples you give (of typicality effects and so on) are examples of concrete, everyday concepts (bird, fruit, fish, furniture), when really the argument you want to make against the classical view of concepts is about much more abstract concepts (knowledge, truth, justice).

I don't think I understand what you're saying with the musical instruments example.

Maybe that doesn't really make sense. Allow me to retract that.

Replies from: lukeprog
comment by lukeprog · 2011-09-27T16:51:08.855Z · LW(p) · GW(p)

but you also seem to be implying that if we accept the first then the second never holds, and that can't be right.

Ah. I can see how you might infer this from my post, but I definitely do not endorse that if we accept the first then the second never holds.

It is useful in many cases to talk about concepts in terms of necessary and sufficient conditions. I use stipulative definitions like this all the time. But stipulated definitions aren't the aim of "classical view" conceptual analysis.

The examples you give (of typicality effects and so on) are examples of concrete, everyday concepts (bird, fruit, fish, furniture), when really the argument you want to make against the classical view of concepts is about much more abstract concepts (knowledge, truth, justice).

Like I said, my post is already too long, and I provided references if you're interested in reading more studies on typicality effects.

comment by windmil · 2011-09-27T13:28:57.837Z · LW(p) · GW(p)

Your link to dualism early on is missing a closing parenthesis. I had to click a whole extra button. Thought I might let you know and save others from this taxing ordeal. Also, in the second block quote, there might be a typo, "philosophy close to the hone," instead of "bone".

Replies from: lukeprog
comment by lukeprog · 2011-09-27T15:31:35.093Z · LW(p) · GW(p)

Fixed, thanks.

comment by teageegeepea · 2011-09-29T00:12:46.217Z · LW(p) · GW(p)

There was a cognitive scientist at Mixing Memory who had a skeptical take on some of Lakoff's views on metaphors and was doing a chapter-by-chapter analysis of one of his books, but then he disappeared off the face of the internet. I still have no idea what happened to him; a shame if he died, presumably without having signed up for cryonics.

Replies from: Unnamed, Nisan
comment by Unnamed · 2011-09-29T23:23:54.560Z · LW(p) · GW(p)

It looks like he is alive and well, but he has left the field and academia. Now he has a job with a state government. I probably shouldn't say more than that, to respect his anonymity. When he was blogging he dropped some clues about his identity which I followed to figure out who he was, and your comment prompted me to google him to see what he's up to now.

Replies from: teageegeepea
comment by teageegeepea · 2011-10-03T00:00:20.864Z · LW(p) · GW(p)

Cool. I tried to do some googling based on first name and possible university but didn't come up with anything. It would have been nice of him to have a better last post than one promising content in the future.

comment by Nisan · 2011-09-29T01:38:41.460Z · LW(p) · GW(p)

At least some of that skepticism is well-deserved. Lakoff and his coauthor Johnson deny the existence of facts.

comment by Jack · 2011-09-27T17:20:16.505Z · LW(p) · GW(p)

Worth noting though that pre-cog sci philosophy didn't just take conceptual analysis for granted -- there were plenty of dissenters. Also, as others have pointed out, quite a bit of the history of philosophy consists of philosophers criticizing established conceptual analyses rather than trying to invent new ones.

Also worth noting that not all concepts are transparently finicky like 'fish' and 'justice' -- though quite possibly species, mass, time, object, etc. are finicky too.

comment by Carinthium · 2011-09-27T09:48:59.239Z · LW(p) · GW(p)

To be fair, definitions in the conventional philosophical sense do have their uses -- they can reduce or eliminate ambiguity when they can be adopted in practice (in law, for example). A theoretical humanity which did use the philosophical version of definitions would probably be more rational.

Replies from: endoself
comment by endoself · 2011-09-28T04:10:16.242Z · LW(p) · GW(p)

That theoretical humanity would be so different as to make the comparison pointless, since humans ofte

comment by Jim Stone (jim-stone) · 2020-05-22T19:19:53.760Z · LW(p) · GW(p)

A fairly influential philosopher named "Wittgenstein" made essentially this critique 70 years ago. Many philosophers still do conceptual analysis in terms of necessary and sufficient conditions, but few think the project will ever work perfectly for any natural language terms (though a 99% accurate categorization rate is often completely realistic even with only a few conditions). Even fewer think this is the way the brain learns and stores concepts.

Prototype theory is a much better theory of how we learn concepts, but it doesn't lend itself to legal stipulation very well. For that you need necessary and sufficient conditions. So the old Platonic practice still has some value, even if we no longer think the conditions define an objective class boundary or form the core of our unconscious sorting algorithms.

That's not to defend all such analysis. Trying to define "love" or "art" in terms of necessary and sufficient conditions is probably going to be a fool's errand (resulting in either a low rate of accurate categorization or a long, unwieldy list of conditions), as was the Gettier-inspired "search for a 4th condition" cottage industry in epistemology. (Though I think some things were learned in the attempt.)

comment by asr · 2011-10-03T06:12:13.529Z · LW(p) · GW(p)

I don't see that we can get away from conceptual analysis so easily. There are a whole lot of cases where we make commitments to particular doctrines, beliefs, promises and so forth, as expressed in words.

Law is all about using articulated definitions and natural-language rules to decide disputes. And we find ourselves using terms like "cause" and "knowledge" all the time in law. Such terms also show up in daily life -- if I tell somebody I will do the best I can, it's rather important to me that I understand how they're likely to understand that phrase, and what sort of concepts that will or won't call to mind.

Certainly, different people are going to use terms differently, and even the same person will use them differently in different circumstances. But if I'm on a jury and somebody asks me whether the defendant had actual knowledge of the crime, and then asks me to justify my answer, I need something to fall back on other than "I put all the facts into my brain and the neuron for 'yes' was more active than the one for 'no'."

In the Republic, Socrates isn't analyzing 'justice' just because he feels like showing off -- he needs to explain, on the one hand, why challenging tradition isn't the kind of injustice that people should be killed for, and on the other hand, why his pupils shouldn't overthrow the government for personal gain. Is there a better tool than definition and conceptual analysis for this social purpose?

Sure, definitions are often imperfect for capturing what I mean by a given term -- but I don't have anything better to use, and I think it would be a decent use of somebody's time to think hard about which definitions seem to pick out useful sets in ways that don't conflict too violently with how I previously was using the word. It's also a good use of somebody's time to figure out where the sticky bits of a definition are and how we might refine the definition to do the right thing in that messy corner case. I do actually sometimes have to use imprecise terms precisely and consistently, and pointing out that brains don't work that way doesn't remove the real social need.

Replies from: billswift
comment by billswift · 2011-10-25T17:45:44.232Z · LW(p) · GW(p)

You seem to be confusing definitions and concepts. Definitions are our best verbal description of our concepts, but they are not concepts. I think this is best thought of as an example of a map-territory confusion.

Also, many fields, especially your example of legal language, purposely limit meanings of their definitions for the purpose of greater precision and clarity, but more generally even definitions are fuzzy around the edges, though less so than concepts.

comment by thomblake · 2011-09-27T15:02:18.071Z · LW(p) · GW(p)

For reference, there is a field of study which purports to do this sort of thing, Neurophilosophy. Not to be confused with Neuro-philosophy from the Schrodinger's Cat Trilogy, which is merely the study of philosophy using brains.

Replies from: lukeprog
comment by lukeprog · 2011-09-27T15:48:36.849Z · LW(p) · GW(p)

Yup. Long-time fan of the Churchlands. Though, there are lots more who do cognitive-science-informed philosophy without calling themselves neurophilosophers.

comment by byrnema · 2011-09-27T18:32:08.972Z · LW(p) · GW(p)

In its standard form, conceptual analysis assumes the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.”

I suspect that I am a bit slow on the uptake here, but I'm not sure what's not true. (Something was thought and now we know it isn't so?)

On the one hand, I understand that a set might be defined as a collection of objects satisfying just a handful of necessary and sufficient conditions, and that humans often think about simple sets that are defined in this way. For example, the set of primes is particularly easy to define with just a couple of conditions.

It seems a bit awkward at first, but I suppose I could also think of any human-defined set as a 'concept' for the person considering the set. So for example we have a 'concept' of prime number.

What about the other direction? Do all human concepts map to sets? In which case, are these sets explicitly defined in our brains by a set of necessary and sufficient conditions?

I would have guessed the answer should be "yes" for any concept that we're certain actually maps to a set (like the set of 'fish'). Even if we do have fuzzy boundaries for a category, this fuzziness could reflect our uncertainty about which set we're pointing to or uncertainty about the rules that make the set. (Doubtless I'm echoing some 15-year-stale argument...) Explicitly, uncertainty whether an olive is a fruit could reflect either one of the following problems:

  • The person isn't certain which set called "fruit" you mean. Do you mean fruit1 defined as all plant organs that have seeds, or fruit2 defined as plant organs that are exceptionally sweet?

  • The person isn't sure what the conditions are for being in the set "fruit" and supposes that olives could belong to the set with some probability. That is, the uncertainty reflects their knowledge about the set rather than the fuzziness of the set itself.

Are we discussing whether a set maps to something 'real' in a person's brain? It seems natural to think of objects belonging to or not belonging to a set based on criteria -- is this not what my brain does when it thinks about a set like 'fish'? Does my brain do this sometimes? Or is the question whether my brain does this always?

Finally, is it possible to have a concept without defining a set? ... Perhaps if a concept is just pattern matching? For example, let's suppose I see the sequence 2, 4, 2, 4. I have a 'concept' of a pattern that it is alternating 2s and 4s. Have I necessarily defined a set (for example, the set of patterns ABABAB...) to predict the number that comes next?

In summary, I would like to understand more clearly the conclusion regarding what cognitive science has learned about the relationship between sets and concepts.

Replies from: DSimon, None
comment by DSimon · 2011-09-27T23:42:44.707Z · LW(p) · GW(p)

Even if we do have fuzzy boundaries for a category, this fuzziness could reflect our uncertainty about which set we're pointing to or uncertainty about the rules that make the set.

But there aren't any rules out there in the universe that define concepts; the concepts are in our heads. You could certainly argue for picking a strict definition (i.e. figure out exactly where a "small pile of sand" becomes a "heap of sand" using T-Rex's method), and convince everyone that it's the best definition... but that's just a different (though possibly better) concept, not a more accurate representation of a set rule in either sense of the word "set".

Focusing on the boundary between member and non-member is not typically a useful practice anyways (though it can be fun, or meta-useful as when discussing cog-sci itself). As pointed out in the OP, the parts of a concept people use the most are the ones that contain the most obvious members of the set. If you're squirreling around in the fuzzy border-lands of "is it a heap? is it not?", then you need to go back and figure out what question you're actually trying to ask, instead of arguing about a concept that doesn't have much immediate usefulness.

Replies from: byrnema
comment by byrnema · 2011-09-28T21:50:19.329Z · LW(p) · GW(p)

The set of 'heaps' is an interesting example of a set.

Do you suppose, given our sense of pleasant dissonance with respect to trying to identify the smallest number of elements in a heap, that one of the necessary qualities of a 'perfect' (most typical) heap is that a person doesn't know how many elements there are?

Or, rather ... thinking about it further -- being a 'heap' refers to how it appears to have been formed, with items randomly piling on top of one another. And at first, it is difficult to imagine 'heaping' of just one thing. Yet I suppose a person could heap themselves on the floor, and then given that context, it suddenly makes sense for there to be a heap of even one thing. Say, a pair of pants thrown carelessly on a chair.

So just a few moments were needed to trace the correct conditions for being a heap, and then it makes sense. The set of 'heaps' is defined based on the verb (items belong to the set based on the believability that a configuration may have been formed in such a way) and not on the number of elements.

Replies from: DSimon
comment by DSimon · 2011-09-28T22:38:49.209Z · LW(p) · GW(p)

Hm, that makes sense. For a similar fuzzy-quantity-border problem to what I was thinking of originally, though, consider these other nouns:

  • Crowd
  • Bunch
  • Drop (of liquid)
  • Bundle
  • Shard

And so on. And that's not even getting into adjectives where the fuzzy-border property is even more pronounced (tall, short, big, small, heavy, light).

Replies from: byrnema
comment by byrnema · 2011-09-29T22:53:55.526Z · LW(p) · GW(p)

Some of these words I have 'concepts' for, others I don't. If I don't have a concept for the word, it seems to be understood just at the association stage -- I can only come up with a list of contexts where I would use the word.

If a set is defined by necessary and sufficient conditions, then having an association means that you have some sufficient conditions but you don't know the necessary conditions. That is, you can identify a region of ideas that are in the set but you don't know where to draw any boundaries.

At this sufficient but unknown-necessary-conditions stage, you don't really understand what the word means (you don't have a well-defined concept) because you need to be able to delineate something that is not in a set in order to really understand anything about what a set is. I agree that not knowing the exact boundary may not be important - but it is important that there is some known interior and some known exterior. My conclusion is that any concept that is understood at any elementary level must have both sufficient and necessary conditions.

Regarding those words which are more associations than concepts, on NPR today the following sentence stuck out as relevant:

At 1:18 Daysha Eaton says:

'headwaters is kind of a political word to a lot of people around here; nobody really knows what that means or wants to agree that is where the mine is but --'

Hearing this statement sort of cemented a hypothesis I was considering (in a draft reply to this comment this morning) that many words don't have real concepts behind them because their main use is to signal something. For example, I would rarely use the word 'crowd' when thinking to myself. I would only use the word 'crowd' hyperbolically, to indicate to someone that they should imagine more people than they would otherwise expect. In other words, the word augments a concept of how many and the meaning depends on a particular context of myself and a listener.

I think all words have some sort of concept behind them (for example, 'headwaters') but since they're not literally used to convey that concept very often, a speaker of that language can be confused about or forget the concept behind the word.

In an internal dialogue, I would only use the word 'crowd' if there was literally jostling of elbows; I think that is the "original" concept for me. What does 'crowd' mean to you?

Replies from: DSimon
comment by DSimon · 2011-09-30T22:06:10.680Z · LW(p) · GW(p)

Hm, I see where you're coming from, and I agree with you so far as it goes when mapping between sets and concepts. But to switch back to talking about words instead of concepts, I am not sure why you (seem to be) going with a set-based approach. Thinking of words as fuzzy regions seems more useful to me when trying to analyze how words are used, and when trying to figure out how words ought to be used.

As shown in the OP, people tend to use words not as strict sets but as regions in thing-space with more and less typical members (i.e. a bluebird is a more typical member of "birds" than a penguin or an ostrich). Though some situations will demand strict borders, in general the fuzzy region approach seems more beneficial, because it allows for quick low-level inductive reasoning. If you know that a bird is typical in bird aspects A and B (say, it has feathers, and it has a large wingspan), then you can more confidently predict that it's typical in bird aspect C (it can fly). It's less brittle to use probabilities in situations like these than strict boolean sufficient-or-necessary values.
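
As a toy sketch of that difference in brittleness (the weights here are invented by me, not taken from any model in the literature):

```python
# Strict necessary-and-sufficient reasoning: one missing "required"
# feature and the inference simply fails.
def flies_strict(has_feathers, large_wingspan):
    return has_feathers and large_wingspan  # all-or-nothing

# Graded reasoning: each typical-bird feature observed raises our
# confidence in the unobserved feature ("flies"). Invented weights.
def flies_graded(has_feathers, large_wingspan):
    confidence = 0.2        # base rate before observing anything
    if has_feathers:
        confidence += 0.35
    if large_wingspan:
        confidence += 0.35
    return confidence       # a probability-like score from 0.2 to 0.9

print(flies_strict(True, False))  # False -- nothing more can be said
print(flies_graded(True, False))  # ~0.55 -- "pretty sure", not certain
```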

Replies from: byrnema
comment by byrnema · 2011-10-01T02:04:06.700Z · LW(p) · GW(p)

Though some situations will demand strict borders, in general the fuzzy region approach seems more beneficial, because it allows for quick low-level inductive reasoning

You've convinced me on the importance of fuzzy sets. I'm sure quick low-level inductive reasoning is of vital importance, but even more immediately it seems that flexibility in meaning is useful for communication. I don't have to hunt for the exact right word, many approximate words will do, and I can describe new concepts by stretching an old one.

However, when you write:

It's less brittle to use probabilities in situations like these than strict boolean sufficient-or-necessary values,

I don't see how sets defined by sufficient-or-necessary conditions don't allow fuzziness. For example, if we say that a bird must have feathers (a necessary condition) and that a bluebird is a bird (a sufficient condition), there is still plenty of gray area for penguins and ostriches.

I think of sufficient conditions as being associations. We learn that penguins, ostriches and doves are all birds, so those are sufficient conditions. A necessary condition puts an edge on your set -- staplers aren't birds because they don't have feathers.

On the other hand, a stapler that looks like a bird could be thought of as a bird to oneself. If anything has enough of the characteristics of a bird you could consider it one. For example a small plastic object with no feathers whatsoever is a 'birdie' because it moves like a bird.

So now I'm leaning towards agreeing. We just have associations, things are like other things if the characteristics overlap, and there aren't any necessary conditions for deciding if something is 'like' something else.

Replies from: DSimon
comment by DSimon · 2011-10-01T04:22:07.248Z · LW(p) · GW(p)

[...]it seems that flexibility in meaning is useful for communication. I don't have to hunt for the exact right word, many approximate words will do, and I can describe new concepts by stretching an old one.

Yes, strongly agreed. This idea makes me want to think of adjectives as tugging a concept-defining region of thing-space in a new direction.

That works especially well in languages like Lojban, where adjectives and nouns are not distinguished from each other. For example there is "blanu" which means "x is blue" and there is "zdani" which means "x is a house", and you can say "blanu zdani" (blueish-house) just as easily as "zdani blanu" (houseish-blue).

[...]saying that a bird must have feathers (a necessary condition)

That actually is a good example of a brittle requirement, in ways that are even more directly problematic than shuttlecocks and bird-shaped staplers. What about a plucked chicken? What about a duck that, due to a genetic disease, never had feathers? What about (NOTE: This example isn't valid re: real evolutionary history) a member of an intermediary species between pterodactyl and modern birds?

Replies from: Desrtopa, byrnema
comment by Desrtopa · 2011-10-01T04:33:00.655Z · LW(p) · GW(p)

Not that it particularly affects your point, but pterodactyls are not genetic precursors to birds (they split off before the clade Dinosauria), and feathers predate the first true dinosaurs capable of flight.

Replies from: DSimon
comment by DSimon · 2011-10-01T05:04:16.511Z · LW(p) · GW(p)

Whoops, didn't know that, thanks.

comment by byrnema · 2011-10-01T11:28:43.195Z · LW(p) · GW(p)

That actually is a good example of a brittle requirement,

Yeah, good point. I'm entirely convinced. Even for an apparently straight-forward category like 'bird', there's not a single necessary condition you can point to. Even if there are some examples of categories with necessary conditions (I don't know), this is evidence that the necessary conditions aren't an intrinsic part of the way we structure a concept.

comment by [deleted] · 2011-09-27T22:01:52.092Z · LW(p) · GW(p)

Is there a concept of beauty? Is there a set containing everything that is beautiful and nothing that is not?

Replies from: wedrifid, byrnema, lessdazed
comment by wedrifid · 2011-09-27T22:52:18.614Z · LW(p) · GW(p)

Is there a concept of beauty? Is there a set containing everything that is beautiful and nothing that is not?

Yes. Next question.

comment by byrnema · 2011-09-27T23:46:53.639Z · LW(p) · GW(p)

I don't think beauty is a 'good' concept in everyday usage. Perhaps this is because it doesn't map to a set for me!

On the other hand, in a context where the word 'beauty' is being used in a specific way that I could get a handle on, I can imagine knowing for that context what concept 'beauty' is pointing to and being able to identify the set.

Note this doesn't have anything to do with beauty being subjective or a 2-place word. If I know what concept to tie 'beauty' to, I could then know what set of things are beautiful. Note also this would be a very specific set (say, 'Beauty2035JL') since there would probably be many similar sets that I could immediately encounter.

Replies from: None
comment by [deleted] · 2011-09-27T23:52:57.827Z · LW(p) · GW(p)

Then, if I'm not mistaken, the point being made here is that our idea of beauty is not a single, specific concept, but a vague feeling that could correspond to any of myriad different concepts. Therefore, we can't give necessary and sufficient conditions for a thing to be beautiful—unless, as you mention, we determine which concept of beauty we're referring to.

(It sounds to me like "concept" means one thing when lukeprog says it and something else when you say it.)

Replies from: Morendil, byrnema
comment by Morendil · 2011-09-28T06:49:04.827Z · LW(p) · GW(p)

All that may be true - my beef is that we don't get that from cognitive science! Plain old philosophy gets us there.

comment by byrnema · 2011-09-28T01:20:20.635Z · LW(p) · GW(p)

Oh, my point was that (for me) 'beauty' isn't a concept but just a token that gets bandied about. I have some faith that there is a concept behind the token when other people use the word, and I have used the word in contexts where I've estimated that the concept is somehow 'close enough' to what might be meant by beauty ... but simultaneously I'm aware I'm not communicating any specific concept and probably not communicating the one I intend.

Even a vague feeling could point to a concept. If you have a vague feeling about 'beauty' that could correspond to any of a myriad of different concepts, then that is probably a set. (The set of all things you do tie or would tie to that feeling at that moment.)

I'm liking the idea that any concept must point to an actual set, though, and I think I'll go with that definition in my thoughts from now on. So: a 'concept' is something in my brain that defines a set.

... But now it is time to go back to lukeprog's post and check what he meant by concept and start using that token with his meaning in this context.

Replies from: None, scav
comment by [deleted] · 2011-09-28T01:46:41.996Z · LW(p) · GW(p)

Even a vague feeling could point to a concept. If you have a vague feeling about 'beauty' that could correspond to any of a myriad of different concepts, then that is probably a set.

Or it's any of myriad different sets.

But yes, trying to standardize our terminology will probably be helpful!

comment by scav · 2011-09-28T14:21:00.900Z · LW(p) · GW(p)

a 'concept' is something in my brain that defines a set

I don't think you do have anything in your brain that defines a set, except when you formulate a precise verbal or other description of an actual mathematical set. Which is not what you're doing when you ponder a concept like beauty.

In the instant that you think about whether something is "beautiful" the concept is a transient configuration of neuron state that at one particular point in time associates something in your memory or perception with the word "beautiful". In which case it's not any more meaningful than any other random association that happens to occur.

How would you tell this is what is happening instead of your brain containing a well-defined set? First note, it is not going to be consistent - you will change your mind. It's also not going to be complete - there will be things where you don't have any particular feeling about whether they are beautiful. Also, you will be able to say whether things are beautiful even when they aren't a discrete thing and have boundaries just as fuzzy as the concept you relate them to (e.g. a beach with surrounding landscape). And you will do this without even noticing.

Replies from: byrnema
comment by byrnema · 2011-09-28T15:21:25.004Z · LW(p) · GW(p)

In a way, I completely agree with you. I agree with you in the sense that I think you are slicing reality in exactly the right way, and any remaining disagreement is just definitional.

transient configuration of neuron state that at one particular point in time associates something in your memory or perception with the word "beautiful". In which case it's not any more meaningful than any other random association that happens to occur.

Agreed, we should distinguish this random-association-type thought from the brain process (however it may be done) of mentally defining a set.

What I want to do next is say that at this random-association stage of a thought process, you don't have a concept. I think you haven't seized a concept unless you've defined the set. The concept, if it is lurking there, hasn't been understood and hasn't been 'owned'.

the concept is a transient configuration of neuron state

Yes, I think this transience is what makes it difficult to recognize and discuss concepts correctly. For example, in several places 'concept families' have been mentioned, as though the primary object is a fuzzy set and at instances of thought we're picking out particular sets from this fuzzy family. I see this reversed: sets are always specific, but our thoughts transition so fluidly from one set to a nearby set that we imagine there is a single larger fuzzy set that these sets are coming from.

For example, when we think of the set of fish, we are likely to first consider something like 'the set of animals that look just like Nemo, but with any color variation'. (This is what was referred to as a 'most typical' member.) Then a few seconds later we remember sharks are also fish and throw them in. Both sets are called 'fish' in our minds from one moment to the next, but they were different sets and our brain can distinguish them, we just didn't bother to track the differences as our concept of 'fish' evolved. So a thought process will evolve a lineage of sets SetFish1-->SetFish2-->SetFish3 over the course of a few seconds.

I think the fact that our brain can easily distinguish them all (via necessary and sufficient conditions, if not actual single-word linguistic tokens) is evidence that we understand the individual sets first, and the understanding (concept) of a 'fuzzy set family' comes from the observation and generalization of this lineage.

Note: I mean, this is what I think now. I'd be interested in a different paradigm for how my brain understands a concept and/or a set.

comment by lessdazed · 2011-09-27T22:14:53.128Z · LW(p) · GW(p)

http://lesswrong.com/lw/oi/mind_projection_fallacy/

Replies from: DSimon
comment by DSimon · 2011-09-28T22:54:45.605Z · LW(p) · GW(p)

I'm voting this comment down, not because I don't think the link is relevant or useful (because it certainly is!), but because in this context linking to it without any explanation comes across as rather rude, and is less likely to be clicked on by people who would benefit from reading it.

comment by Randolf · 2011-10-03T22:29:29.797Z · LW(p) · GW(p)

Eagles are solitary hunters who don't spend much time with other birds, are quite rare in numbers, and live only in the wilderness. Robins, however, are often seen near other birds, live basically everywhere, and are also large in numbers. So mayhaps people choose the robin as the better disease spreader simply because the robin probably is the better disease spreader.

There are very many factors that may affect this kind of test. What do you think about the following?

If you were told that plankton had caught a disease, how likely would you think it would be to spread among other sea species? Now suppose fish had caught the disease.

Now, plankton definitely isn't the most obvious sea species, while fish is. Yet I dare suspect that people would select plankton as the better disease spreader simply because it is everywhere. I'm not certain, though.

So, mayhaps, because people know that a disease spreads best among large populations, they tend to select a species which is large in numbers, or which they think to be large in numbers. Maybe people select the typical member of the group as the better disease spreader (when asked to choose between two) simply because such a typical member is also likely one that people see often, and hence believe to be large in numbers.

In essence, my point is: maybe, when people are asked which of two species in a given group is the better disease spreader, they select the one which is typical of the group of potential disease-spreading species, and leave out the one which is more atypical.

comment by [deleted] · 2011-10-02T06:00:17.058Z · LW(p) · GW(p)

I find George Berkeley's philosophy of immaterialism quite interesting, to the extent of welcoming an informed approach to the philosophy of mind. He further contended that "objects exist independently of mind is not testable or provable by the scientific method, because all objects we would wish to examine must enter our awareness in order to experiment on them."

Although I am a firm believer that philosophy is just the tools we use to understand our own limited conditioning and environment (adjusted to a moment in time), I tend to lean more towards the idea that we never truly come close to explaining anything in terms of absolutes (truths or universals), but rather contribute to the development of new forms of information processing. As we evolve or find new methods of stretching those mental faculties (which are the core essence of the foundations of our ideas of experience), we obtain new philosophical perspectives along the way and dispose of what once was "truth" or "knowledge", passing it forward as "belief". The ancient Greek pre-Socratic Gorgias of Leontini had a strong philosophical argument: as the father of paradoxism he revolutionized our ideas of reality and challenged the essence of being, epistemology, and ontology, which further pushed Plato to examine the immaterial world of Ideas, intangibles, and abstract forms -- hence his works on platonic realism, or platonism.

“I think good philosophy basically just is cognitive science, plus math.” Mathematics is a subset of the metaphysical, abstract conceptualization performed by the mechanism of the brain in its relation to the physical body (and to space, which perception annexes to time). Therefore, to a degree, our experiences are paradoxical at their basis: the contingency issue arises from intending to describe what is real with something that is itself limited to its constituent parts.

The corporeal (body) generates the senses that feed information to the brain, creating the immaterial (mind); the immaterial creates the idea of the physical through abstraction of sensory perception (hence giving it definition and property). The two are essential for supporting the existence of experience (subjective empiricism) under limited parameters, whose only exit seems to be an upgrade or evolution of the senses.

Since our capabilities of communication have advanced over the different levels of human development -- from sight, feeling, and hearing, to thought-forms, to signaling, to pictography, to linguistics, to written forms, to history and the internet -- we have organized our individual ideas into a collective frame in which we become more interdependent on our connections than ever before, and our mental processing and understanding grow exponentially; each time, we change how we view the physical under the direction of the immaterial (mind).

So does the mind define the body/object (experience), or does the body/object define the mind? An interesting paradox. Without the mind's functions (coma), what happens to the body? Without the body's functions (death), what happens to the mind?

An example of this would be a debate on "qualia", or the refutation of epistemological knowledge: "The sky is blue!" Is the sky blue? What makes the sky seem blue? What is blue to a congenitally blind person who has never experienced that information processing? And does describing it from another's experience make it blue? How much of "knowledge" is "belief", and vice versa?

comment by tetsuo55 · 2011-09-27T20:37:13.676Z · LW(p) · GW(p)

While reading this I got the idea that this article is attacking the current standards for “how to order things in nature”.

I have two things to say in response:

  1. Direct Instruction and, I guess, the scientific method in general both claim/prove that you can cut reality at the exact joints required to make available only those hypotheses that explain the thing. (So we can come up with an unfalsifiable set of data on what “red” is.)

  2. Only real data about a thing should be stored: flat things that say something concrete about the thing. Categorizing the thing on a scale with other similar things -- meta-information -- should be done for each unique question posed. For example: we store all the facts about dolphins in a database (like: has lungs, has fins, has sonar), but not labels (is mammal, is fish, etc.). You can then query the database for things you want, for example all things with fins, and dolphins will fit the meta-label; do the same for all things with gills and the dolphin will not be in the result.

In short: labels are situational and should be clearly defined as to what kind of characteristic they scan for and what their use is.

As a knowledge designer, I would store all the simple facts in a database, and then use a conditional script to select the best label for any given query at that moment. This means I throw away the current concept of “fish”, whatever it is, and make it concrete by asking: “What specific characteristics are you interested in for this particular query?” (We can decide on common queries that we want to make international standards, like we have now for fish, but we need to make clear in what situations that standard even means anything.)
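
A rough sketch of the kind of thing I mean (the species and facts are made-up placeholders):

```python
# Store only concrete facts about each thing; compute labels per query.
facts = {
    "dolphin": {"has_lungs", "has_fins", "has_sonar", "bears_live_young"},
    "salmon":  {"has_fins", "has_gills", "lays_eggs"},
    "shark":   {"has_fins", "has_gills", "bears_live_young"},
}

def query(required):
    # A "label" is just: everything whose facts include all required characteristics.
    return [name for name, props in facts.items() if required <= props]

print(query({"has_fins"}))               # ['dolphin', 'salmon', 'shark']
print(query({"has_fins", "has_gills"}))  # ['salmon', 'shark'] -- dolphin drops out
```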

Replies from: DSimon
comment by DSimon · 2011-09-28T05:07:33.830Z · LW(p) · GW(p)

We store all the facts about dolphins in a database (like: has lungs, has fins, has sonar), but not labels (is mammal, is fish, etc.)

So if I follow: To record a fact like "has lungs" you first have to define "lungs". And then you run into the same problem: if you're not recording labels, then you have to identify lung objects from non-lung objects by specifying descriptors (has cell structure A, processes oxygen, etc.), and then you have to define those descriptors, and pretty soon your query for dolphins (or chairs, or oranges, or Libertarians) is a huge-ass quantum probability distribution which is a pain to deal with.

To avoid having to write that huge query, you allow the user to specify conditionals in terms of other conditionals which were defined in advance. That gets you the same query in the end, but in a way that's a lot easier for the user.

That sounds fine to me; in fact, it sounds like reductionism, which is very handy stuff indeed. However, it doesn't address the issue in the OP, which is that human concepts tend to act like fuzzy values, not like strictly delineated sets. Let's take a naive high-level bird query: feathered vertebrate, flies, reproduces by laying eggs. That describes bird characteristics which are very useful, and which can't really be discarded; however, it excludes things that we commonly consider to be birds, such as parrots that have had their wings clipped, and penguins.

Human brains can (and often do) apply labels to objects strongly or weakly. Your query language has to be similarly heuristic if you want it to be useful for all or even most of the questions humans tend to ask.
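
For instance, one rough way to make the query heuristic (toy data again, mine not yours) is to rank things by the fraction of requested features they match instead of demanding all of them:

```python
facts = {
    "robin":   {"feathered", "vertebrate", "flies", "lays_eggs"},
    "penguin": {"feathered", "vertebrate", "lays_eggs"},
    "bat":     {"vertebrate", "flies", "bears_live_young"},
}

def fuzzy_query(wanted):
    # Score each thing by the fraction of wanted features it matches.
    scores = {name: len(wanted & props) / len(wanted) for name, props in facts.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(fuzzy_query({"feathered", "vertebrate", "flies", "lays_eggs"}))
# [('robin', 1.0), ('penguin', 0.75), ('bat', 0.5)] -- the penguin stays mostly-bird
```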

Replies from: tetsuo55
comment by tetsuo55 · 2011-09-28T07:48:33.978Z · LW(p) · GW(p)

Yes, I think you understood what I meant. It is a recursive system where you keep defining each thing in detail, hacking at the edges of reality until any hypotheses left are all equally valid.

It is hard work, and it is possibly too much for the brain to handle, but afaik, other than the handful of Direct Instruction studies, nobody has done any really big tests. The tests done on the small scale were highly successful, though.

I obviously program this stuff in a specially designed tool, which makes it intuitive and easy to keep defining the definitions deeper and deeper (you basically end up with laws of nature at the bottom, like the math explaining gravity, etc.).

I guess what I am trying to say is that the fogginess of concepts in our heads can be a result of our teaching methods and not a flaw of the mind per se; my only evidence is the fact that we can make tools that help us clear up the fog, and that using these tools/methods to teach people seems to have a big effect.

Replies from: DSimon
comment by DSimon · 2011-09-28T15:41:17.430Z · LW(p) · GW(p)

But, the fuzziness isn't necessarily a flaw at all; having more and less typical examples of a category has shown itself to be pretty handy, since we can use the level-of-typicalness to influence how confidently we can make correlations ("birds lay eggs and have feathers and fly, X has feathers but doesn't fly, so I'm only pretty sure it lays eggs").

I think that would be a valuable feature in a fact database.