"What Is Wrong With Our Thoughts"

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-17T07:24:30.136Z · LW · GW · Legacy · 109 comments

"But let us never forget, either, as all conventional history of philosophy conspires to make us forget, what the 'great thinkers' really are: proper objects, indeed, of pity, but even more, of horror."

David Stove's "What Is Wrong With Our Thoughts" is a critique of philosophy that I can only call epic.

The astute reader will of course find themselves objecting to Stove's notion that we should be cataloguing every possible way to do philosophy wrong.  It's not like there's some originally pure mode of thought, being tainted by only a small library of poisons.  It's just that there are exponentially more possible crazy thoughts than sane thoughts; cf. entropy.

But Stove's list of 39 different classic crazinesses applied to the number three is absolute pure epic gold.  (Scroll down about halfway through if you want to jump there directly.)

I especially like #8:  "There is an integer between two and four, but it is not three, and its true name and nature are not to be revealed."

109 comments

Comments sorted by top scores.

comment by [deleted] · 2009-05-17T10:47:35.885Z · LW(p) · GW(p)

Stove himself concludes that his "nosology" is probably not worth compiling. I think he's actually just using it to make the same point you've made by mentioning entropy. He considers it in order to justify rejecting it.

He then does something similar with the possibility of figuring out individual cases, rejecting it because the findings won't be generalizable.

Then he gets to what seems like his main point: getting rid of almost all philosophy because it's crazy.

(I thought the piece as a whole was much funnier than the list. It's a tongue-in-cheek version of bending over backwards to avoid accusations of dismissing something crazy out of hand.)

comment by RolfAndreassen · 2009-05-18T17:03:57.368Z · LW(p) · GW(p)

Possibly Stove intended this only as an extended Take That to philosophers he dislikes; but it seems to me that he is a bit too dismissive of his own project, the 'nosology'. Without wanting a Fully General Counterargument, I think it might be useful to have a set of, say, five or six different classes of erroneous statements; and I also think Stove is too eager to insist on the singularity of each of his examples. For example, he states that the objection "not verifiable" cannot be applied to his example 8; I don't see why not. Anything whose "name and nature are not to be revealed" has just been declared unverifiable, no? Similarly 3 through 7 look pretty unverifiable to me.

Then he has some examples further down the list which look reasonably testable, such as 13: "3 is a lucky number". One could easily do an experiment on this by submitting lottery tickets with and without 3s filled in; and as for 14, I think a simple "false-to-fact" would suffice to dismiss it.

So far, then, there are three classifications: false to fact, contradiction, meaningless through having no connection to observation. We may need a fourth to cover such statements as 26: "The tie which unites the number three to its properties (such as primeness) is inexplicable". This seems somehow vaguely related to observation, in that there does seem to be something called three which has the property of primeness, and nobody has really explained the tie between triples of objects and these properties. (It is perhaps not strongly coupled to observation, but I hesitate to dismiss it completely on that ground.) I suggest a fourth classification of 'uninteresting' or 'unfruitful': a proposition which, when adopted as an axiom, yields few or no deductions, is unfruitful. One might also call it the 'So What' error: making statements which, even if true, are not useful to know.

There does seem to be some overlap here; for example, Stove's 25: "Five is of the same substance as three, co-eternal with three, very three of three: it is only in their attributes that three and five are different." This looks to me quite unverifiable, but even if it were true, So What? What conclusions or prediction would you draw from this?

Contrary to Stove, I think these four will cover all his list: False to fact, contradiction, meaningless, and So What. I am not certain, however, whether this insight is useful.

Replies from: cousin_it
comment by cousin_it · 2009-05-19T11:33:38.822Z · LW(p) · GW(p)

I'd unify your "So What" with "meaningless" into a single category "does not constrain observations". Math passes the test inasmuch as it constrains observations about outcomes of proof checking.

But now some people will complain (are already complaining) that we reject the majority of humanity's thought.

Replies from: RolfAndreassen, Annoyance
comment by RolfAndreassen · 2009-05-27T15:57:57.238Z · LW(p) · GW(p)

Again, it does seem observable that nobody has explained why three is prime and four isn't. (I'm not sure you can actually use 'why' in an intelligible way here; possibly I'm being confused by non-mathematical language applied to math.) It's not an observation I would expect anyone to care about, and possibly it may be the equivalent of nobody having seen something invisible; but it does seem to make a statement that could in principle have gone the other way.

Replies from: thomblake
comment by thomblake · 2009-05-27T16:29:05.329Z · LW(p) · GW(p)

I agree that I'm not sure how you're intending to use 'why' here, and I'm pretty sure there's a good answer for any particular meaning.

To answer the question in a possibly unsatisfactory way, 3 is prime because it is a natural number which has exactly two distinct natural number factors, whereas 4 is not prime because it has more than two distinct natural number factors.
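That definition translates directly into code. A minimal Python sketch (illustrative only, counting factors by brute force):

```python
def is_prime(n: int) -> bool:
    # Prime iff n has exactly two distinct natural-number factors (1 and n).
    factors = [d for d in range(1, n + 1) if n % d == 0]
    return len(factors) == 2

print(is_prime(3))  # True: factors are 1 and 3
print(is_prime(4))  # False: factors are 1, 2, and 4
```

On this definition 1 is not prime either, since it has only one factor.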

comment by Annoyance · 2009-05-19T14:25:14.007Z · LW(p) · GW(p)

What humanity does isn't "thought", by and large. Not in any meaningful sense. It's mostly the expression of prejudices combined with associational triggers and repeating what others say.

Part of becoming an effective thinker is recognizing that unpleasant realities need to be acknowledged even when we'd prefer they weren't the case. For people living in this time, in this place, one of those truths is that we're surrounded by blatant stupidity. Even worse, we're blatantly stupid a lot of the time.

Deriving those conclusions from the evidence, and then acknowledging their validity, is one of the basic necessary steps to becoming better. No problem can be (expected to be) solved if we deny its reality.

comment by Matt_Simpson · 2009-05-17T22:53:41.372Z · LW(p) · GW(p)

I have to say that the positivist critique that "it's all meaningless" is seductive and it may well be correct - it feels like the words have meaning, but when you try to parse the sentence the feeling quickly disappears.

The problem is, this isn't very useful for talking about specific errors and how to avoid them. Many of the statements on that list looked rather meaningless to me, but to someone who believes in one of these statements, there are some underlying beliefs or confusions that need to be addressed before the "meaningless critique" will have any effect. At this point, pointing out the meaninglessness of their pet statement becomes entirely superfluous.

Replies from: Jack
comment by Jack · 2009-05-18T01:33:03.916Z · LW(p) · GW(p)

There is a pretty innocent reason why those passages look meaningless: they're all jargon-filled, and when you don't know what the jargon means, you will likely fail to understand what the passages mean. A paper on quantum chromodynamics is going to look meaningless to someone who doesn't know what quarks, quanta, flavor symmetry, gluons, hadrons, chirality, etc. refer to. Similarly, I assume most people here have no idea what Plotinus means by "Being", "Essence", "Intellectual-Principle", "form", etc. I've done course work on Neo-Platonism and I don't remember what all of that was about. The same goes for the other passages.

Now Plotinus in particular might still be meaningless, since some of that jargon is actually meant to refer to real things that he thinks exist. And insofar as he is referring to non-existentials, whether or not the passage is meaningful depends on your philosophy of language (it is either false, meaningless, or non-propositional).

Occasionally you find an analytically trained philosopher working on continental subject matter and they tend to assure me that the jargon and unconventional usage actually DO mean things. What does happen, I think, is that the jargon and unconventional language gets abused by stupid people who don't really understand the original philosopher but try to use their language. Since the language is so hard to parse in the first place it ends up being pretty easy for a charlatan to survive. Particularly if the charlatan isn't actually working in a philosophy department where there are people to challenge her.

In that vein, I don't think "bad continental philosophy" consists in Foucault and leading figures like him, but in many of their insipid followers, on the continent and off, who were never trained to express themselves clearly and logically.

This is why all philosophers should be trained in the analytical tradition, even if they want to work in other areas.

Replies from: jimrandomh
comment by jimrandomh · 2009-05-18T01:51:42.068Z · LW(p) · GW(p)

There is a pretty innocent reason why those passages look meaningless: they're all jargon-filled, and when you don't know what the jargon means, you will likely fail to understand what the passages mean.

No, the passages given in the article have much deeper problems than just the jargon. The jargon only serves to defend these texts from criticism; because they're difficult to understand, anyone who says that these passages are wrong or mere gibberish can be accused of not understanding them. This defense works even if the critic understands the text perfectly.

Replies from: Jack
comment by Jack · 2009-05-18T02:33:25.058Z · LW(p) · GW(p)

Uh, maybe. I'm willing to hear arguments to that effect. But you didn't give one.

I think Plotinus is definitely wrong, I don't know enough about Hegel to form an opinion, and I disagree with what I know of Foucault. But that doesn't make what they wrote meaningless.

Replies from: jimrandomh, jimrandomh
comment by jimrandomh · 2009-05-18T02:48:11.397Z · LW(p) · GW(p)

Arguments to what effect? Are you objecting to my claim that "you don't understand" is used inappropriately to defend bad philosophy, to the claim that jargon makes it easier to do so, or to my claim that the passages have deeper problems?

Replies from: Jack
comment by Jack · 2009-05-18T03:17:50.730Z · LW(p) · GW(p)

Sorry, I should be specific. I don't think the passages, or the writing of these philosophers and the well-known continental philosophers generally, are gibberish. I think the reason people think they are gibberish is the jargon. I would like to see an argument for why I should consider them gibberish for reasons other than jargon I don't understand.

And since I hold that the jargon is meaningful, I don't think that the jargon "only" serves to defend the texts from criticism (did you really mean "only"?). I also deny that a critic who understands the text perfectly would argue that the text is meaningless, but that issue will be addressed by the argument I ask for above.

(Note: Of course there are deeper problems with these passages. But those problems don't have anything to do with the syntactic rules for sentence formation or semantic rules for word usage. In other words, the problem isn't that they're gibberish.)

Replies from: jimrandomh
comment by jimrandomh · 2009-05-18T03:38:37.693Z · LW(p) · GW(p)

I define "gibberish" to mean "difficult to understand and entirely or almost entirely false or meaningless". Since you have said you think Plotinus and Foucault are wrong, and I think we can agree that they're at least somewhat obfuscated, then we must have different definitions. What's yours?

Replies from: Jack
comment by Jack · 2009-05-18T04:50:02.369Z · LW(p) · GW(p)

I define gibberish as "difficult to understand and entirely or almost entirely meaningless". I think Plotinus and Foucault are "difficult to understand and entirely or almost entirely false". A statement is meaningless if it either fails to follow rules of syntax, e.g. "Running the the snacks on quickly!", or of semantics, e.g. "Green ideas sleep furiously."

The distinction is actually pretty important. If you know something is meaningless then you can move on, but you can't decide something is false without first considering the argument, obfuscated or not.

There is some middle ground when it comes to arguments about things that don't exist. The trinity argument (and probably Plotinus) appeals to something that doesn't exist, and so it says things that would be meaningful if the holy trinity were real but can't really be evaluated since there is no such thing. Obviously there is no reason for you to care much about this argument. But I don't think Hegel, Foucault or Heidegger and the other usual suspects are talking about things that don't exist.

Replies from: saturn, cousin_it, jimrandomh
comment by saturn · 2009-05-18T20:50:13.820Z · LW(p) · GW(p)

Syntax does rules necessarily broken imply meaninglessness not.

Replies from: Jack
comment by Jack · 2009-05-19T01:24:29.392Z · LW(p) · GW(p)

Semantic rules aren't holding knives to the throat of meaning either.

So yeah, it is more complicated than what I said before, because our brains are pretty good at fixing broken sentences with context. Rules for context and pragmatics should also be included in the requirements of meaningfulness. My bad for missing that.

comment by cousin_it · 2009-05-18T14:31:34.169Z · LW(p) · GW(p)

The word "exist" confuses you. Does three exist? Maybe yes, maybe no; what real-world consequences would arise from three existing or not? If a tree falls in the forest, etc.

Humanity to date knows two families of statements that appear to possess truth values independent of the listener's psychology:

1) Experimental results, objectively verifiable by repeating the experiment.

2) Axiom-based mathematics, objectively verifiable e.g. by proof checking software.

Of course people can make personally or culturally meaningful statements that don't fall into type 1 or 2. Just don't delude yourself about their universal applicability or call them "science".

Replies from: Jack
comment by Jack · 2009-05-18T16:24:51.921Z · LW(p) · GW(p)

First, the word exist does not confuse me any more than it confuses anyone else. If you think it does, you should say why, since it wasn't explained in the previous post. The ontological status of numbers is a classic and ongoing philosophical dispute; whether there are real-world consequences to the question, I don't know, but even if there aren't, it does not follow that the question has no truth value.

Experimental results don't verify anything; they either falsify or fail to falsify huge sets of different scientific propositions. When an experimental test of a hypothesis comes up false, one can dismiss the hypothesis or one can dismiss any number of auxiliary assumptions one had when making the hypothesis. It is the job of scientists to find the best interpretation of experimental results according to criteria such as parsimony, consistency, usefulness, etc. But scientific theories are better understood as best working interpretations, not objectively verified truths that exist independent of human interpretation. Metaphysics uses the exact same criteria to try to figure out the best interpretations with regard to other issues, for which experiments are sometimes relevant but often not.

Also, axiom-based math can't really be addressed by proof-checking software, since you can't program proof-checking software before discovering some axiom-based mathematics. Plus, it isn't as if we started believing math was true 60 years ago. We figured it out because our vulnerable, biased human brains happen to have considerable abilities for ascertaining the truth.

Anyway, we also know things based on non-experimental observation and data gathering. This includes non-scientific things, like whether or not there is a car on the street, as well as the less experimental sciences, like astronomy, linguistics, and economics. Knowledge in linguistics and economics is certainly somewhat more precarious than in physics, since in the former fields it is by turns often impossible or unethical to run experiments. But that doesn't mean the insights in these fields aren't useful. I have no problem calling them sciences.

Of course there are the other so-called analytic truths- the whole set of possible tautologies one can make with natural language and entailment relations between categories. Altogether, I think there are quite a few more statements that possess truth values than just experimental science and axiomatic mathematics and they all involve human interpretation.

This isn't a reason to be frustrated, it just means we don't get to take an aerial picture of the terrain in making our map, we've got to figure it out by making best guesses according to limited information.

Finally, so what if some philosophy is simply personally and culturally meaningful statements? That isn't a reason to reject them as bad thinking.

Replies from: cousin_it
comment by cousin_it · 2009-05-18T23:13:43.588Z · LW(p) · GW(p)

You might have missed my emphasis on well-transferable truth value. Even if the "ontological status of numbers" question has a well-defined truth value to you, or non-experimental economics, or linguistics... how do you transfer the answer between individuals? I've indicated two methods of independent verification that correspond to science and math; is there a third one? Persuasive-sounding literature doesn't cut it, because it can be used for religion just as well. About your final question, what makes philosophy distinct from literature?

Replies from: Jack
comment by Jack · 2009-05-19T00:07:38.609Z · LW(p) · GW(p)

Fine. My point was that experimental results aren't perfectly transferable either. They require interpretation. Hypotheses are proven and disproven in the context of hundreds of other assumptions. We construct models and theories that we think best explain experimental data. Linguists and economists construct models and theories that best explain observed economic behavior and language use. Linguists and economists, granting some exceptions, agree with each other on lots of issues! There is not as much agreement as there is in physics since there is more room for biases and preconceptions when you can't repeat an experiment over and over again to prove someone wrong. But that doesn't mean linguists and economists aren't getting less and less wrong.

Philosophy is a broad subject. A lot of it (definitely most of it in the English-speaking world) isn't just culturally meaningful but scientifically meaningful. In epistemology they try to form models and theories that best explain our understanding of what counts as knowledge. In ethics they do the same for our understanding of what counts as moral. In philosophy of science they try to explain our understanding of what counts as science. There are also fields such as philosophy of physics and philosophy of biology which address philosophical and interpretive issues within those fields, like interpreting quantum mechanics (where the philosophers knew the Copenhagen interpretation was bunk long before the physicists began realizing it) and the ontological status of biological categories like 'species'. There is Philosophy of Mind, Philosophy of Time, Philosophy of Mathematics, etc. That doesn't even include the fields that were once part of philosophy but expanded to become separate subjects (logic, linguistics, cognitive science). There is even something called experimental philosophy where they, you know, experiment. I don't know what your experience with philosophy is. Maybe you just had someone make you read Plato.

Now it is true religions make metaphysical arguments. But I think you're wrong to say it can be used for religion just as well. The metaphysical arguments for God's existence fail. The vast majority of analytic philosophers (and even continental philosophers) are atheists.

But of course there is a whole continental tradition which might be better described as "culturally meaningful" or "politically meaningful". But even here there is an obvious sense in which culturally meaningful philosophy is distinct from fiction. Cultural philosophy makes arguments and descriptions which readers can dispute and challenge. You can say of a cultural philosopher that he does not have a complete grasp of our culture or that his response is misguided. Cultural philosophers hold debates. All the time. Novelists don't. Now, I admit that these debates are not often that valuable since if the two sides have a different understanding of culture they're going to talk past one another. Nonetheless, viewers still have grounds to adjudicate between them and often their thinking inspires political action.

Literature is stories. Stories that may have allegorical or instructional purposes, but they're not in a form that one can contradict, and that makes them particularly dangerous.

comment by jimrandomh · 2009-05-18T15:41:28.845Z · LW(p) · GW(p)

So you maintain that anything which follows a few syntactic and semantic laws cannot be gibberish? I disagree; text can have meaning and still be gibberish. Consider a sequence of words drawn uniformly at random from a dictionary, then slotted into a repeating template like (noun) (verb) (article) (adjective) (noun). The template ensures that no rules of syntax are violated. A few constraints on the vocabulary can ensure there are no egregious violations of semantic rules, like green ideas and furious sleeping. Restrict the vocabulary to a few hundred concrete words and you can even ensure that every sentence makes a testable prediction. But it's definitely gibberish.
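A toy version of such a generator is only a few lines of Python; the word lists here are invented for illustration:

```python
import random

# Hypothetical vocabulary, restricted to concrete words.
nouns = ["dog", "stone", "river", "lamp", "bird"]
verbs = ["pushes", "follows", "touches", "lifts", "circles"]
articles = ["the", "a"]
adjectives = ["red", "heavy", "wet", "small", "loud"]

def gibberish_sentence() -> str:
    # Fill the fixed template (noun) (verb) (article) (adjective) (noun)
    # with uniform random draws, so the rules of syntax are never violated.
    words = [
        random.choice(nouns).capitalize(),
        random.choice(verbs),
        random.choice(articles),
        random.choice(adjectives),
        random.choice(nouns),
    ]
    return " ".join(words) + "."

for _ in range(3):
    print(gibberish_sentence())  # e.g. "Stone follows a wet lamp."
```

Every output is grammatical, and with concrete vocabulary every sentence even makes a testable prediction, yet the stream as a whole is plainly gibberish.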

Replies from: Jack
comment by Jack · 2009-05-18T16:56:52.807Z · LW(p) · GW(p)

Well, there are a lot of semantic rules, and plenty that we haven't formalized. So I'm not convinced anyone now alive could write such a program. But I'm not a programmer, so maybe someone has proved me wrong. However, if they were successful, I don't think I would consider the result gibberish, especially if each sentence made a testable prediction. In this case wouldn't some of the predictions be true? If so, then it is clear that your definition is not broad enough.

That's troubling, since I had already concluded your definition was too broad because it seemed to include important but complex and falsified scientific claims.

comment by jimrandomh · 2009-05-18T02:37:21.692Z · LW(p) · GW(p)

Uh, maybe. I'm willing to hear arguments to that effect. But you didn't give one.

If you narrow it down to one passage, I will. The reason I didn't before was because you referred to all the passages collectively, and I don't have time to analyze all of them.

comment by PhilGoetz · 2009-05-17T17:13:27.024Z · LW(p) · GW(p)

The history of philosophy can't really have been one of thousands of years of nearly unrelenting adoration of stupidity. What probably happened is that philosophers became popular only if their ideas were simple enough and appealing enough. There is a bandpass filter on philosophy, and it has both a low and a high cutoff.

We propagate knowledge by collective judgements about it. In fields where we can't eliminate bad ideas by experiment, both the very worst and the very best ideas must be rejected. The requirement that an influential philosopher appeal to a large group of philosophers guarantees that relatively simplistic, self-aggrandizing or at least inoffensive crap with enough fuzziness to give one leeway in how to interpret it will be favored over careful, complex, a-polite ideas.

I recently looked at a bunch of my grad-school AI textbooks. It made me ill to think how many years I wasted studying an entire discipline filled with almost nothing but knowledge that has so far proven useless to me across a wide range of problems and disciplines for anything other than writing computer games - and useful there only because you can scale the game down and restrict its environment until the techniques work. Is this a different way of going wrong than the philosophers', or is it the same thing? Many of the bad-old-fashioned-AI (BOFAI) ways of doing things are quite difficult: you can't accuse Kripke or Quine of being simplistic.

I wonder if the internet can provide a way for thinkers of the highest quality to find each other, and pass on ideas to each other that would go over the head of the larger professional bodies. I wonder if these ideas would influence the world, or remain useless in the hands of their brilliant but uninfluential custodians.

However, my experience on LW has shown that the best and brightest people are still very bad at conveying even relatively simple ideas to each other.

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence. I wish there were some Twitter version of Science, that published only pithy, insightful comments, unsubstantiated by experiment. But since I've also seen cases where researchers spent decades gathering data and publishing critiques in their field and getting no traction, this alone is not enough.

How can we use the internet to recognize good ideas and get them to the people who can use them? Cross-discipline reputation brokers could be part of the solution.

Replies from: jimrandomh, ChrisG, Richard_Kennaway, Douglas_Knight, CannibalSmith
comment by jimrandomh · 2009-05-17T20:34:24.238Z · LW(p) · GW(p)

What probably happened is that philosophers became popular only if their ideas were simple enough and appealing enough.

On the contrary, philosophers became popular only if their ideas were complicated enough to fill a book. The ideas that were simple enough to be true were also too short to publish.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T20:50:06.116Z · LW(p) · GW(p)

An interesting possibility. (Nitpick: "Simple enough to be true" implies that complex ideas can't be true. This is wrong.)

Can you give an example of a simple but non-obvious truth that was available but passed over in philosophy?

Replies from: AllanCrossman
comment by AllanCrossman · 2009-05-17T20:59:58.018Z · LW(p) · GW(p)

What do you mean by "available"?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T22:33:44.843Z · LW(p) · GW(p)

Eg., I'm not interested in hearing that medieval philosophers ignored the idea that the motion of the planets is governed by the same laws that govern the motion of bodies on earth.

Replies from: AllanCrossman
comment by AllanCrossman · 2009-05-18T12:01:02.225Z · LW(p) · GW(p)

So, are we looking for something which is:

  • Simple,
  • True,
  • Not obvious,
  • Was claimed as true by someone or other,
  • But mostly ignored?

Perhaps Aristarchus and his heliocentrism would fit the bill (while not strictly true, it was truer than the alternative).

comment by ChrisG · 2009-05-18T07:51:34.321Z · LW(p) · GW(p)

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence.

I for one would be interested in hearing these sentences, and also which fields you feel are being held back by simple errors of logic. The margins here are quite large ;).

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-18T23:00:24.903Z · LW(p) · GW(p)

Some examples off the top of my head:

Rodney Brooks and others published many papers in the 1980s on reactive robotics. (Yes, reactive robotics are useful for some tasks; but the claims being made around 1990 were that non-symbolic, non-representational AI was better than representational AI at just about everything and could now replace it.) Psychologists and linguists could immediately see that the reactive behavior literature was chock-full of all the same mistakes that were pointed out with behavioral psychology in the decade after 1956 (see eg. Noam Chomsky's article on Skinner's Verbal Behavior).

To be fair, I'll give an example involving Chomsky on the receiving end: Chomsky prominently and repeatedly claims that children are not exposed to enough language to get enough information to learn a grammar. This claim is the basis of an entire school of linguistic thought that says there must be a universal human grammar built into the human brain at birth. It is trivial to demonstrate that it is wrong, by taking a large grammar, such as one used by any NLP program (and, yes, they can handle most of the grammar of a 6-year-old), and computing the amount of information needed to specify that grammar; and also computing the amount of information present in, say, a book. Even before you adjust your estimate of the information needed to specify a grammar by dividing by the number of adequate, nearly-equivalent grammars (which reduces the information needed by orders of magnitude), you find you only need a few books-worth of information. But linguists don't know information theory very well.
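A back-of-envelope version of that calculation can be sketched in a few lines of Python; all the figures below are made-up placeholders, not measurements of any actual grammar or corpus:

```python
import math

# Assumed, illustrative figures (not from any real NLP system):
grammar_rules = 10_000              # rules in a broad-coverage grammar
bits_per_rule = 50                  # rough cost of encoding one rule
grammar_bits = grammar_rules * bits_per_rule        # ~500,000 bits

words_per_book = 100_000
bits_per_word = math.log2(50_000)   # ~15.6 bits to pick a word from a 50k vocabulary
book_bits = words_per_book * bits_per_word          # ~1,560,000 bits

print(f"grammar: ~{grammar_bits:,} bits")
print(f"one book: ~{book_bits:,.0f} bits")
```

On numbers like these, a few books' worth of exposure already carries more information than the grammar requires, even before dividing by the number of nearly-equivalent grammars.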

Chomsky also claims that, based on the number of words children learn per day, they must be able to learn a word on a single exposure to it. This assumes that a child can work on only one word at a time, and not remember anything about any other words it hears until it learns that word. As far as I know, no linguist has yet noticed this assumption.

In the field of sciencology?, or whatever you call the people who try to scientify science (eg., "We must make science more efficient, and only spend money discovering those things that can be successfully utilized"), there was an influential paper in 1969 on Project Hindsight, which studied the major discoveries contributing to a large number of US weapons systems, and asked whether each discovery was done via basic research (often at a university), or by a DoD-directed applied R+D program specific to that weapon system. They found that most of the contributions, numerically, came from applied engineering specific to that weapon system. They concluded that basic research is basically a waste of money and should not have its funding increased anymore. Congress has followed their advice since then. They ignored 2 factors: 1) According to their own statistics, universities accounted for 12% of the discoveries, but only 1% of the cost; 12% of the discoveries for 1% of the cost works out to roughly twelve times as many discoveries per dollar. This by itself shows basic research to be more cost-effective than applied research. 2) They did not factor in the fact that the results of each basic research project were applied to many different engineering projects; but the results of each applied project were often applied only to one project.

NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they're still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.

Replies from: PhilGoetz, Nick_Tarleton, Douglas_Knight, steven0461
comment by PhilGoetz · 2009-05-19T00:54:20.257Z · LW(p) · GW(p)

Though you also see cases where people from the outside do get their message across, repeatedly, and fail to make an impact. Something more is going wrong then.

The FDA, in its decision whether to allow a drug on the market, doesn't do an expected-value computation. They would much rather avoid one person dying from a reaction than save one person's life. They know this. It's been pointed out many times, sometimes by people in the FDA. Yet nothing changes.

EDIT: Probably a bad example. The FDA's motivational structure is usually claimed to be the cause of this.

Maybe when one particular stupidity thrives in a field, it's because it's a really robust meme for reasons other than accuracy. There are false memes that can't be killed, because they're so appealing to some people. For example, "Al Gore said he invented the Internet" - a lie repeated 3 times by Wired that simply can't be killed, because Republicans love it. "You only use 1/10th of your brain" - people love to imagine they have tremendous untapped potential. "Einstein was bad at math" - reassures people that being good at math isn't important for physics, so it's probably not important for much.

So, for example, NASA keeps trying to get ET's attention, not because it's rational, but because they read too many 1950s science fiction novels. The people behind project Hindsight and Factors in the Transfer of Technology wanted to conclude that basic research was ineffective, because they were all about making research efficient and productive, and undirected exploratory research was the enemy of everything they stood for. Saying that humans have a universal grammar is a reassuring story about the unity of humanity, and also about how special and different humans are. And the FDA doesn't picture themselves as bureaucrats optimizing expected outcome; they picture themselves as knights in armor defending Americans from menacing drugs.

comment by Nick_Tarleton · 2009-05-19T04:49:16.009Z · LW(p) · GW(p)

This, and your comment below, should be top-level posts IMO.

comment by Douglas_Knight · 2009-05-19T15:05:00.687Z · LW(p) · GW(p)

These are interesting examples, but they're not what I envisioned from your original comment. (The Brooks example might be, but it's the vaguest.)

A problem is that people gain status in high-level fights, so there is a lot of screening of who is allowed to make them. But the screening is pretty lousy and, I think, most high-level fights are fake. Are Chomsky's followers so different from other linguists? Similarly, Brooks may have been full of bluster for status reasons that were not going to affect how the actual robots worked. It may be hard for outsiders to tell what's really going on. But the bluster may have tricked insiders, too.

Also, "You don't understand information theory," while one sentence, is not a very effective one.

comment by steven0461 · 2009-05-19T10:57:02.428Z · LW(p) · GW(p)

NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they're still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.

People are still doing it, not NASA though. Their rationalizations can get pretty funny. It seems stupid but rather harmless; it's hard to find a set of assumptions under which there's a nontrivial probability that it matters.

comment by Richard_Kennaway · 2009-05-19T14:28:00.744Z · LW(p) · GW(p)

I wonder if the internet can provide a way for thinkers of the highest quality to find each other, and pass on ideas to each other that would go over the head of the larger professional bodies. I wonder if these ideas would influence the world, or remain useless in the hands of their brilliant but uninfluential custodians.

TED.

comment by Douglas_Knight · 2009-05-18T02:43:57.231Z · LW(p) · GW(p)

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence.

I think that you're saying that the outsiders can't be published without learning the jargon and doing experiments. But publication is not the only avenue. If it really only takes a single sentence, the outsider should be able to find an insider who will look past jargon and data and listen to the sentence. Then the insider can tell other insiders, or tack it onto a publication, or do the new experiments.

If jargon is not just a barrier to publication but also to communication, it's a lot harder to find a sympathetic insider, but it hardly seems impossible. Also, in that situation, how can outsiders be sure they understand?

These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don't care about seeking truth, only about having a routine.

Replies from: MrShaggy
comment by MrShaggy · 2009-05-18T21:55:56.409Z · LW(p) · GW(p)

"These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don't care about seeking truth, only about having a routine."

Well, a large part of it is funding/bureaucracy/grants. I tend to think that's the main part in many of these fields. Look at Taubes's Good Calories, Bad Calories for a largely correct history of how the field of nutrition went wrong and is still going at it pretty badly. You do have a growing number of insiders doing research not on the "wrong" path, and you did all along, but they never got strong enough to challenge the "consensus", and that's due not just to the field but to the forces outside the field (think tanks, government agencies, media reports). So even being published and well-known isn't enough to change a field.

comment by CannibalSmith · 2009-05-17T19:18:38.596Z · LW(p) · GW(p)

[..] has so far proven useless to me [..]

It's just you.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T20:41:52.732Z · LW(p) · GW(p)

I was talking about the content of artificial intelligence books published in the 1980s. None of the examples you gave involved anything from the BOFAI school of artificial intelligence; nothing that would have been in those books.

comment by ShardPhoenix · 2009-05-18T04:17:57.267Z · LW(p) · GW(p)

While I mostly agree with the article, I don't think the Foucault example given at the start is entirely bad - it just seems like a long-winded warning against confusing the map with the territory (or more specifically against trying to hammer a square territory into a pre-conceived round map).

comment by Annoyance · 2009-05-17T18:28:59.958Z · LW(p) · GW(p)

The history of philosophy can't really have been one of thousands of years of nearly unrelenting adoration of stupidity.

I often see statements like that. "This couldn't possibly be the case", "that can't really happen", etc.

The first question we should ask ourselves when we see such statements: Why?

Usually, the person speaking is dismissing possibilities and potentialities out of hand for one of a variety of reasons, rather than having a valid and justifiable reason for discarding the contingency.

And even when there are good reasons, it's important to remember that we can always be wrong. Conservation of mass-energy is an incredibly useful and extraordinarily broad-in-application principle, and showing that a proposed idea in physics or engineering violates it is a powerful critique, but it's possible that it's not really the case.

comment by hrishimittal · 2009-05-17T12:36:09.036Z · LW(p) · GW(p)

Genetic engineering aside, given a large aggregation of human beings, and a long time, you cannot reasonably expect rational thought to win. You could as reasonably expect a thousand unbiased dice, all tossed at once, all to come down 'five,' say. There are simply far too many ways, and easy ways, in which human thought can go wrong. Or, put it the other way round: anthropocentrism cannot lose.

That's the same argument against rationalist winning that has been seen many times on LW. However, it is based on hopelessness and fear, rather than on knowledge of even a single failure of an organised attempt at large-scale rational winning. So, while Stove recognises the obviously wrong thoughts of philosophers, he himself goes wrong in thinking the above by making a wrong probability estimate.

So just to be clear, we are saying that the probability of a significant number of people turning to rational thinking is greater than the probability of winning a lottery, right?

Replies from: truename
comment by truename · 2009-05-18T00:50:53.180Z · LW(p) · GW(p)

Hi, "first time, long time." :->

The way I read that, I thought he was talking about even larger, longer term societal structures. Like, imagine many generations of atheist eudaimonia that doesn't collapse on itself -- creating ridiculous new philosophy-religions, over generations.

Whether a society of atheists could endure, was a question often discussed during the Enlightenment, though never decided. If the question is generalized a little, however, from 'atheists' to 'Positivists,' then it seems obvious enough that the answer to it is 'no.'

The author's future history seems to involve static human nature at play for a long, long time.

Kant and Hegel, or some other equally 'great thinkers', will still be read with reverence by the most intelligent and educated part of mankind, long after modern science is forgotten, or is confined to a few secret departments of the bureaucracy.

Someone needs to give this guy a hug. Or, even better, a copy of "Engines of Creation".

And, according to wikipedia, he died in 1994...

comment by cousin_it · 2009-05-17T12:16:50.894Z · LW(p) · GW(p)

I've long loved this piece, but today would file most of its examples simply under "getting carried away".

Items on the list that reminded me of Eliezer's writings: #19, #22, #32, #35. Indictment not intended.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-17T21:48:23.104Z · LW(p) · GW(p)

Sir, you can bet that if I ever used a phrase like "transcendental unity" I would be able to tell you exactly what it meant. It would probably have math in back of it. Because if I wanted to talk about something I couldn't really define, I would call it something much less impressive and as close to ordinary English as possible, for fear of being called on it.

The above is not applicable to Eliezer_2003 and below.

comment by phane · 2009-05-17T11:56:17.732Z · LW(p) · GW(p)

I don't like this paper. It's wholly scathing for no reason other than to justify ignoring all of philosophy. Some philosophy is valuable and some is not, and of his 40 statements about three, I'd say 6 of them are claims I would take seriously and would hear arguments for, were I interested in the nature of three.

Generally, continental philosophy is trash, but I wouldn't throw out the baby with the bathwater.

Replies from: PhilGoetz, Richard_Kennaway
comment by PhilGoetz · 2009-05-17T17:54:12.880Z · LW(p) · GW(p)

Analytical philosophy is a quest for truth; continental philosophy is a way to get laid. (I hear it works better in France.)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-05-17T19:27:24.374Z · LW(p) · GW(p)

But it bears noting explicitly that many of his examples represent positions from within analytic philosophy. For example, "23 The proposition that 3 is the fifth root of 243 is a tautology, just like 'An oculist is an eye-doctor.'"

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-17T21:51:14.338Z · LW(p) · GW(p)

I'd agree with that, actually, I'd just note that tautologies have to be empirically observed somehow and also that the case of the oculist and the eye-doctor is nowhere near as clear-cut.

Replies from: Jack
comment by Jack · 2009-05-18T01:09:33.188Z · LW(p) · GW(p)

The piece about tautologies having to be empirically observed is one of the most bizarre posts I've ever read by you. It is so strange that I'm not really sure if there is anything I can say that would change your mind, if you really think you could be convinced that 2+2=3 in that way. I can't even tell where you went wrong. Do you also hold that the identity relation has to be empirically observed? Could you be convinced that 4=3? That 3 doesn't = 3? Do you believe you could be convinced that triangles on Euclidean planes are round? Do you not trust modus ponens and modus tollens? How does one even empirically observe tautologies in symbolic logic?

Replies from: JGWeissman, komponisto, randallsquared
comment by JGWeissman · 2009-05-18T21:47:36.397Z · LW(p) · GW(p)

That 2+2=4 is a fact about a mathematical system that exists independently of the physical universe, including us humans that decided to use those symbols to express that fact. That fact is in the territory. But, in order to interact with the physical universe, it has to be discovered by some physical system that explores logical conclusions, such as our brains. This exploration builds our map of the territory. Our uncertainty about the tautological statement does not reflect some vagueness in the territory of logic, but our uncertainty about the workings of our physical brains, and their ability to build maps that reflect the territory.

Problems of logic have 100% correct answers, but our physical brains cannot become 100% entangled with those correct answers. It is observation, which can include abstract observations of our own logical reasoning, that gives us increasing entanglement which approaches, but never reaches, 100%.

Replies from: Vladimir_Nesov, Jack
comment by Vladimir_Nesov · 2009-05-18T23:03:05.160Z · LW(p) · GW(p)

Whatever you could possibly know and value about reality can only exist independently of the physical universe. (Huh?) If your uncertainty about math doesn't indicate uncertainty of the math, and it's an argument for math being otherworldly, it's also an argument for the territory being otherworldly, which is clearly a confusion of terms.

And so you should bring the math back where it belongs, an aspect of the territory.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-18T23:21:36.059Z · LW(p) · GW(p)

Whatever you could possibly know and value about reality can only exist independently of the physical universe.

That is not what I am saying. I mean that things that we think of as tautologies, or purely logical truths, which are true no matter what universe we are in, exist independently of the physical universe. Facts about the physical universe are not in this class. Indeed, the entanglement of our physical brains with these logical truths is an example of a fact about the physical universe that, of course, depends on the universe.

If your uncertainty about math doesn't indicate uncertainty of the math, and it's an argument for math being otherworldly...

You have my argument backwards. I first make the point that facts about math are not facts about the physical universe to support that the uncertainty we have about math, which exists in our heads, in our physical universe, does not exist in math itself. The argument does not work the other way, there are plenty of instances of uncertainty in our minds that are not uncertainty in the things elsewhere in the physical universe that they are about.

My comment was an attempt to explain why we need observation to believe things that are objectively true regardless of the world we exist in. Basically, we need evidence that our brains, existing in the physical world, are suitable for representing the logical truths.

comment by Jack · 2009-05-19T00:16:55.969Z · LW(p) · GW(p)

This is really helpful and I think I agree with all of it. I've just never understood "observation" to include my logical reasoning. If your position is that we know 2+2=4 by virtue of observing our own reasoning and not by virtue of any sensory data (information about the outside world) then I don't think that position is any different from the one I already hold. But is this Eliezer's position? His OB post made it sound like he could be swayed to think 2+2=3 as a result of external events mediated by his sensory perception of those events. That is what I objected to.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-19T05:24:32.238Z · LW(p) · GW(p)

Well, I think that observations can be both our reasoning and sensory data.

Suppose you have a model* of your own accuracy at addition of integers, which is that you are 95% likely to get the correct answer, 2% to be one high, 2% to be one low, and with the remaining 1% divided somehow amongst other possibilities. Then, when you actually observe that when adding 2 + 2 you get 4, this is Bayesian evidence that gives a likelihood ratio of 47.5 : 1 (that is, 0.95/0.02) in favor of the theory that 2 + 2 = 4 compared to the theory that 2 + 2 = 3.

Now suppose you have a collection of pebbles, and your model of the pebbles claims that if you count out 2 distinct collections of pebbles, and then combine them and count the total, that the sum of the counts of the distinct collections is 90% likely to be the count of the combined collection, and is 4% likely to be one high, 4% to be one low, and 2% to be something else. And then you actually count out a collection of 2 pebbles, and another collection of 2 pebbles, and combine them, and when you count the combined collection you count 4 pebbles. This is Bayesian evidence with a likelihood ratio of 22.5 : 1 in favor of 2 + 2 = 4 as opposed to 2 + 2 = 3.

In both cases, belief in a logical proposition results from our belief that an observable system has some probability of reflecting logical truth. If, as in the example numbers that I made up just now, we believe that our reasoning process is more reliable than observations of our environment, then the results of our reasoning are stronger evidence, but it is still the same class of evidence.

* I have neglected the harder problem of simultaneously updating propositions about additions and propositions about a given system's probability of representing addition. That is, I have not explained where the models I asked you suppose you have really should come from.
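The two ratios above are easy to check; a short Python sketch, using the probabilities exactly as given in the comment:

```python
def likelihood_ratio(p_correct: float, p_one_high: float) -> float:
    # P(observe "4" | 2+2=4) / P(observe "4" | 2+2=3): if 2+2 were really 3,
    # observing 4 would mean the adding-or-counting process ran one high.
    return p_correct / p_one_high

print(likelihood_ratio(0.95, 0.02))  # 47.5 : 1 for mental addition
print(likelihood_ratio(0.90, 0.04))  # 22.5 : 1 for counting pebbles
```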

comment by komponisto · 2009-05-18T02:26:24.919Z · LW(p) · GW(p)

It may be worth noting that Quine had a view similar to Eliezer's -- which Stove alludes to (dismissively) in the essay.

Replies from: Jack
comment by Jack · 2009-05-18T02:56:31.702Z · LW(p) · GW(p)

Thanks. That is worth noting. My recollection is that Quine denies the existence of analytic statements but doesn't go as far as to hold that tautological statements are just like regular empirical statements. Logical truths still have some kind of special status for Quine. Plus, I think his reasons for denying analytic truths had very little to do with actually being able to imagine a series of experiences that could change his mind about them: it is one thing to claim that such experiences are possible. It's another thing to claim you have just described that set of experiences.

Finally, I remember thinking Quine was being silly, but it has been a while so I'm going to go read and come back.

comment by randallsquared · 2009-05-18T16:05:29.459Z · LW(p) · GW(p)

I don't think I'd read Eliezer's piece about tautologies having to be observed before, but it matches my pre-existing beliefs about its topic, and it seems so obvious that I'm left wondering how you think you got the understanding that 2+2=4, or that triangles on Euclidean planes are not round. Given that you got that understanding somehow, couldn't the same process give you the new understanding, assuming (for this argument) it was true?

Replies from: Jack
comment by Jack · 2009-05-18T17:09:30.627Z · LW(p) · GW(p)

This is certainly a strange divergence of intuitions. I think the story of how I came to know 2+2=4 goes like this: someone taught me that 2 meant -oo- and 4 meant -oooo-. Then someone probably told me that 2+2=4, but I don't think they would have needed to. I think I could easily have come to the conclusion myself, since given -oo- and -oo- I can count four dots. If pushing four objects together meant one of the objects disappeared, I would probably just stop pushing objects together and count in my head. If counting the objects made one of them disappear, I would be pretty damn frustrated, but I'm pretty confident I could realize that reality was changing as a result of a mental operation, and not that I was counting wrong. Aside from being tortured with rats or Cardassian pain sticks, I don't see what would make me think that 2+2 didn't equal 4.

I'm not sure how to explain my thinking any better except to say that it is the same thinking that led generations of philosophers and mathematicians to conclude that mathematical knowledge was a different kind of knowledge than knowledge of our surroundings and the natural world. My reason is the reason Kant distinguished the analytic from the synthetic: a sense that a rational mind could figure these things out without sensory input.

Replies from: orthonormal, byrnema
comment by orthonormal · 2009-05-18T18:29:42.470Z · LW(p) · GW(p)

The trouble there is the claim of a rational mind, in my opinion. It's not logically necessary that our evolved brains, hacked by culture, are going to mirror reality in their most basic perceptions and intuitions.

The space of all possible minds includes some which have a notion of number and counting and an intuitive mental arithmetic, but for which 2 and 2 really do seem to make 3 when they think of it. These minds, of course, would notice empirical contradictions everywhere: they would put two objects together with two more, count them, and count four instead of three, when it's obvious by visualizing in their heads that two and two make three instead. Eventually, a sufficiently reflective mind of this type would entertain the possibility that maybe two and two do actually make four, and that its system of visualization and mental arithmetic are in fact wrong, as obvious as they seem from the inside. Switching "three" and "four" in this paragraph just illustrates how difficult accepting that hypothesis might actually be for such a mind.

The thing is, we ourselves are in this situation, not with arithmetic (fortunately, we receive constant empirical reinforcement that 2+2=4 and that our mental faculties for arithmetic work properly) but with our biases of thought. Things like our preferences and valuations seem to be rational and coherent, in that we can usually defend them all with arguments that look solid and persuasive to us. But occasionally this fiction becomes untenable, as when we are shown to have circular preferences in situations of risk and reward. As Eliezer put it in Zut Allais:

You want to scream, "Just give up already! Intuition isn't always right!"

Or, in this case, "Don't start by assuming that our minds work rationally whenever they see something as obvious! If this is true, it is an empirical fact; and you should be able to see the alternative as possible!"

Replies from: byrnema
comment by byrnema · 2009-05-18T21:21:32.443Z · LW(p) · GW(p)

If this is true, it is an empirical fact; and you should be able to see the alternative as possible!

Indeed, 2+2=4 is only true in some contexts. For example, sometimes 1+1=1 -- in contexts where separate objects lose their distinct identity as soon as they are grouped. (Think of a particular object several times. How many times did you think of it? But how many objects did you think of?)

Later edit: It is interesting that such a benign comment would get 4 downvotes. Perhaps I understand this group well enough to guess why: the experiment I suggested is an entirely "internal" one; it provides no external proof of what I am suggesting. I think that a common reader here feels dismissive of, if not entirely antagonistic towards, knowledge that is internally generated. Personally, I have a preference for the knowledge that arises from internal experience.

Replies from: komponisto, Alicorn, steven0461
comment by komponisto · 2009-05-19T03:46:46.175Z · LW(p) · GW(p)

I agree that the downvoting of this comment was overly harsh. My theory on why it occurred is different, and best illustrated by an example: if someone posted a comment saying "2+2=4 is only true in some contexts; in arithmetic modulo 3, 2+2=1", that comment would have been similarly downvoted.

However, let me be so bold as to say a word in defense of even that hypothetical commenter. Anyone mathematically sophisticated (including our downvoters) will agree that it is possible to construct a mathematical system in which 2+2 equals anything you like -- or, more precisely, for any symbol x, a system can be constructed in which the formula (string of symbols) "2+2 = x" is given the label "TRUE". Mod 3 arithmetic is an example for x = "1".
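For concreteness, one line of Python per system shows the same formula coming out differently depending on which arithmetic interprets it:

```python
print(2 + 2)        # 4 in ordinary integer arithmetic
print((2 + 2) % 3)  # 1 in arithmetic modulo 3
print((2 + 2) % 5)  # 4 again: mod-5 arithmetic happens to agree here
```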

Now, it is at this point that the downvoters protest: "But this is not the same thing as saying 2+2=1! All you've done is change the meaning of the symbols in the formula, such as '2' and '1'. Two plus two is still four, for the original meaning of those words. You're confusing the map and the territory. Downvoted!"

Well, the downvoters do have a point. But, at the same time, let me suggest that they're also making the same mistake as our poor beleaguered commenter!

What they've done, you see, is to make a leap from "Ordinary (i.e. non mod-3, etc.) Arithmetic accurately models certain physical phenomena" to something like "Ordinary Arithmetic is true in (or of) the physical world". Instead of saying what they mean, which is "the physical world is best modeled by a system that has '2+2=4' as a 'TRUE' formula", they say "2+2 is in fact equal to 4".

Small wonder that confusion arises about whether mathematical statements are "empirical" or not! "The physical world is best modeled by a system that has '2+2=4' as a 'TRUE' formula" is clearly an empirical claim. But what about 2+2 = 4, all by itself? When a mathematician at a blackboard proves that 2+2=4 in Ordinary Arithmetic (or, for Eliezer's benefit, that infinite sets exist in standard set theory), has he or she made a claim about physics? No! Not without the additional assumption that the formal system being used is in fact an accurate map of the territory! But the mathematician makes no such assumption; he or she (acting as a mathematician) is interested only in the properties of formal systems. (Yes, that's right: I'm advocating the view known as formalism here. The other well-known positions in the philosophy of mathematics, namely Platonism and intuitionism, suffer from map-territory confusion!)

Mathematical systems, like Ordinary Arithmetic or Mod-3 Arithmetic, are part of the map, not the territory. The facts of mathematics are, so to speak, cartographic, rather than geographic.

Replies from: byrnema
comment by byrnema · 2009-05-19T05:49:47.517Z · LW(p) · GW(p)

In the OB post where tautologies have to be empirically observed somehow, Eliezer writes about waking up one day and discovering all sorts of evidence that 2+2=3. This wouldn't be evidence that 2+2=3 in Peano arithmetic; it would be evidence that Peano arithmetic just doesn't apply for some reason. In my down-voted comment, I was just giving an example of how there can be different kinds of arithmetic if you are willing to be flexible about what arithmetic is. (If you are not willing to be flexible, then you cannot allow the observation that 2+2=3 as an observation about arithmetic, because this is not possibly true in standard arithmetic. Well, the observations are possible, but you'd have to account for them as some kind of grand delusion.) My point is that 2+2=4 in Peano Arithmetic independent of observation, but observation tells you if Peano arithmetic applies or not.

Replies from: komponisto
comment by komponisto · 2009-05-19T07:30:00.100Z · LW(p) · GW(p)

This wouldn't be evidence that 2+2=3 in Peano arithmetic; it would be evidence that Peano arithmetic just doesn't apply for some reason.

Exactly.

My point is that 2+2=4 in Peano Arithmetic independent of observation, but observation tells you if Peano arithmetic applies or not.

It is worth emphasizing that to claim that "2+2=4 in Peano Arithmetic independent of observation" is not to claim that our knowledge of this fact about Peano Arithmetic is independent of observation. (The former claim is about our map of the territory; the latter is about our map of our map of the territory.)

Replies from: Nick_Tarleton, byrnema
comment by Nick_Tarleton · 2009-05-19T08:22:20.043Z · LW(p) · GW(p)

It is worth emphasizing that to claim that "2+2=4 in Peano Arithmetic independent of observation" is not to claim that our knowledge of this fact about Peano Arithmetic is independent of observation. (The former claim is about our map of the territory; the latter is about our map of our map of the territory.)

Could you elaborate? It sounds to me like the former claim is about the territory, and the latter is just hard for me to parse.

comment by byrnema · 2009-05-19T15:52:59.303Z · LW(p) · GW(p)

It is worth emphasizing ...

I'll emphasize with the following analogy: you need to observe the sun to know of it. However, you can nevertheless be certain -- as certain as you are of anything at all -- that the sun exists independently of observation. You need to define the Peano axioms and observe the deductions that lead to the tautologies to know of them, but they are mathematically true independent of your observation.

comment by Alicorn · 2009-05-18T21:56:14.394Z · LW(p) · GW(p)

How many words are in this list?

  • Duck
  • Duck
  • Goose
Replies from: JGWeissman
comment by JGWeissman · 2009-05-18T22:07:38.870Z · LW(p) · GW(p)

The list contains 3 word instances and represents 2 word prototypes.

Replies from: Alicorn
comment by Alicorn · 2009-05-18T23:49:28.391Z · LW(p) · GW(p)

The jargon I was looking for was 3 tokens of 2 types, but close enough :)
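The distinction is easy to make concrete; a quick sketch:

```python
# Three tokens, two types.
words = ["Duck", "Duck", "Goose"]
print(len(words))       # 3 tokens (instances)
print(len(set(words)))  # 2 types (distinct words)
```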

Replies from: JGWeissman
comment by JGWeissman · 2009-05-19T19:36:32.504Z · LW(p) · GW(p)

Heh, with the right jargon, you can accomplish anything. ;)

comment by steven0461 · 2009-05-18T21:28:17.347Z · LW(p) · GW(p)

Doesn't that just mean that grouping doesn't always correspond to addition?

Replies from: byrnema
comment by byrnema · 2009-05-18T23:09:39.486Z · LW(p) · GW(p)

Yes. If 1+1 is anything other than 2, then it's not addition. I gave an example of how combining two things together doesn't always yield addition.

comment by byrnema · 2009-05-18T21:17:12.909Z · LW(p) · GW(p)

Saying that 2+2=4 is a tautology in a certain axiomatic system defined with '+' means that you couldn't have anything but 2+2=4 in that system. It's simply mandatory, and a rational person could not wake up one day and be convinced that 2+2=3 within a self-consistent system that deduces 2+2=4.
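A proof assistant makes that "mandatory" character visible; a minimal sketch in Lean, where the statement reduces by computation on the numerals and the entire proof is reflexivity:

```lean
-- Within the system, 2 + 2 = 4 holds by mere computation;
-- the proof term is just reflexivity.
example : 2 + 2 = 4 := rfl
```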

While tautological truth is independent of observation (let's call it mathematical truth), it is dependent upon context (i.e., a self-consistent axiomatic system). Some mathematical truths in one axiomatic system are false in another. When we talk about whether a mathematical statement is true, we need to specify the context, and, in my opinion, in the most demanding definition of truth, the context is the real, actual, empirical world. So I agree with Eliezer that a mathematical tautology must be observed in order to be true.

When we humans talk about "2+2=4", it is because we have chosen arithmetic from an infinite number of possible axiomatic systems and given it a name and a set of agreed-upon symbols. Why did we do that? Because we observed arithmetic empirically. Obviously, addition is just one operation of infinitely many. The ones we have defined (multiplication, subtraction, addition mod n, taking the cardinality of subsets, etc.) usually have some empirical relevance. While we don't feel very comfortable thinking of those that don't (and this says something about the way we think), I have faith that if we were presented with a very strange set of observations, it would take a pretty short time to train ourselves to think of the new operation as a "natural" one.

... I idly wonder if there is such a thing as a mathematical truth that could not be realized empirically, in any context, and if there would be any way of deducing its non-feasibility.

Replies from: Jack
comment by Jack · 2009-05-19T01:04:37.973Z · LW(p) · GW(p)

Is saying "we could have a different axiomatic system" different from saying "2, 4, +, and = could all mean different things"? Of course we've only defined the operations and terms that are useful to us. I don't care about the naturalness of '+', only that once I know the meaning of the operations and terms, the answer is obvious and indisputable.

Math isn't my field, so by all means show me how I'm wrong.

comment by Richard_Kennaway · 2009-05-18T11:52:26.869Z · LW(p) · GW(p)

Some philosophy is valuable and some is not

Can you give some examples of valuable philosophy, and why you judge it valuable? I incline to the view that ignoring all of philosophy is, to a first approximation, the right thing to do, and that there are very few exceptions worth making.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-05-18T14:01:08.517Z · LW(p) · GW(p)

Off the top of my head: Karl Popper, due to his influence on the scientific method. (Perhaps it's not as valuable today as it was back then, due to Bayesianism, but still.)

Edit: Also, epistemology is a branch of philosophy.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-05-18T15:39:54.199Z · LW(p) · GW(p)

Yes, historically, Popper performed a valuable service, by showing (imperfectly) what distinguishes science from nonsense. (Stove characterises Popper as someone who overreacted to the fact that scientists sometimes make mistakes, but that is less than his due.)

But it's interesting that one can say that about Popper, and a few other philosophers -- that they were at least partly right, and where they were wrong, they were at least wrong, rather than "not even wrong". They created something to be corrected and improved on, not trash to be thrown out.

A colleague in theoretical computer science once showed me a Ph.D. thesis that a logician of his acquaintance had sent him. He found it rather strange in form, compared with the sort of mathematical thesis he was accustomed to reading. I looked at it and laughed. It followed precisely the standard form for a thesis in philosophy. (I think Pirsig describes this in "Zen and the Art...") In chapter 1, the author states the subject he is going to address. In chapters 2 to 8 he writes a detailed history of everything of significance that has ever been written on the subject. In chapter 9 he introduces his own modest contribution, and in chapters 10 to 12 indicates how it relates to the history. A scientific thesis, on the other hand, begins with a similar chapter 1, surveys the previous literature in chapter 2, going back only far enough to establish the context for his work, and the remainder is all about the author's own work.

No subject is worth anything whose entry qualification is a thesis of the first form. It would be interesting to write press-release style digests of current papers in philosophy, summarising their findings in bite-sized chunks:

  • What we studied.

  • What we discovered.

  • How we discovered it.

  • Why it matters.

I don't think it could be done other than as a work of satire. Maybe it should be. Any iconoclastic grad students in philosophy want to give it a go?

comment by pdf23ds · 2009-09-18T21:18:12.522Z · LW(p) · GW(p)

Your link is now broken. Is there some other web archive of the chapter? I've saved a copy from the Google cache, in case it matters to anyone.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-09-18T21:26:15.642Z · LW(p) · GW(p)

The Internet Archive has a copy.

comment by nazgulnarsil · 2009-05-17T13:48:03.778Z · LW(p) · GW(p)

does #23 have a quick explanation or does it require a serious delve into abstract math?

Replies from: cousin_it
comment by cousin_it · 2009-05-17T14:29:17.114Z · LW(p) · GW(p)

One is a provable theorem in an axiomatic system, the other isn't.

comment by Drahflow · 2009-05-17T21:10:19.858Z · LW(p) · GW(p)

Regarding most of the lengthy examples of "philosophy" given by Stove:

Reading a text takes time, and time can be spent acquiring utilons. Hence reading a text is only worthwhile if the expected gain in utilons due to additional knowledge is greater than the expected utilons from using the time differently. This approach kills most of his examples dead in their tracks for me. It also implies positivism: if a text does not generate utilons directly (e.g., the fun of reading fiction), then it needs to provide knowledge (in the form of testable statements about the world) -- otherwise, how would I generate utilons from the "knowledge"?

Possibly, some thoughts are only valuable when more efficient methods of communication become available.

comment by kim0 · 2009-05-17T21:14:28.100Z · LW(p) · GW(p)

Interesting, but too verbose.

The author is clearly not aware of the value of the K.I.S.S. principle, or Ockham's razor, in this context.

comment by CannibalSmith · 2009-05-17T19:25:51.013Z · LW(p) · GW(p)

David Stove's "What Is Wrong With Our Thoughts" is a critique of philosophy that I can only call TL;DR.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T20:47:33.295Z · LW(p) · GW(p)

Voted down for being deliberately obscure. TL;DR?

Replies from: Nominull, mattnewport
comment by Nominull · 2009-05-17T20:54:39.951Z · LW(p) · GW(p)

Voting something down as deliberately obscure because you don't understand it strikes me as an egregious case of the mind projection fallacy.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-18T22:33:30.286Z · LW(p) · GW(p)

If you can take the time to write a comment, but conceal all of its meaning in an acronym, with no hint from context as to the meaning of the acronym, then you deserve to be voted down, because you save 2 seconds of your time but waste many seconds of the time of everyone who reads it and doesn't know what the acronym means.

We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.

(I could be wrong about TL;DR. It's a judgement call as to whether your readers should know an acronym or not. I'd expect people here to know what LW or OB mean.)

Replies from: JGWeissman, John_Maxwell_IV, Nominull
comment by JGWeissman · 2009-05-18T22:38:39.972Z · LW(p) · GW(p)

We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.

How about "Catchy Fallacy Name Fallacy"?

And I agree that it applies here.

comment by John_Maxwell (John_Maxwell_IV) · 2009-05-22T19:41:54.076Z · LW(p) · GW(p)

waste many seconds of the time of everyone who reads it and doesn't know what the acronym means.

Use Urban Dictionary.

Replies from: hirvinen
comment by hirvinen · 2009-05-28T10:45:56.001Z · LW(p) · GW(p)

Thanks to fast internet connections, good web search and online dictionaries, failing to expand an acronym only increases the cost from 5 seconds to 5 seconds per reader...

comment by Nominull · 2009-05-18T23:10:29.985Z · LW(p) · GW(p)

lol

comment by mattnewport · 2009-05-17T20:52:36.748Z · LW(p) · GW(p)

Too long; didn't read.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-18T22:34:13.577Z · LW(p) · GW(p)

Thank you.

comment by Nominull · 2009-05-17T16:14:05.037Z · LW(p) · GW(p)

(It is a very striking fact, however, that I had to go to translations for my three quotations above. Nothing which was ever expressed originally in the English language resembles, except in the most distant way, the thought of Plotinus, or Hegel, or Foucault. I take this to be enormously to the credit of our language.)

How racist and ignorant is this: very racist and ignorant, or super racist and ignorant?

Replies from: cousin_it, thomblake, PhilGoetz
comment by cousin_it · 2009-05-17T16:47:18.506Z · LW(p) · GW(p)

My native language is Russian, yet I find it more comfortable to use English almost exclusively when talking about programming, math or rationality. Case in point: my email conversation with Vladimir Nesov spontaneously switched to English after the first couple emails, even though we're both Russian.

In my impression people born into different languages often do narrate the world differently, at least at the emotional level, and some of those ways can be better than others for certain topics - e.g. less likely to lead you astray with the persuasive/musical component of words. Another example would be the complex, massively recursive sentences common to German philosophy, obviously made possible by regularities and trends of language.

Sorry, I'm downvoting you. You should have argued your point instead of name-calling.

Replies from: Nominull, kim0
comment by Nominull · 2009-05-17T17:20:20.668Z · LW(p) · GW(p)

It seems much more likely that the difference in quality between Anglo and Continental philosophy, to the extent that the difference exists, is based on chaotic factors. One (relatively) good philosopher randomly happens to be born in England and then he teaches his students (relatively) good philosophy, and they teach their students, and a whole tradition is born by chance. In continental Europe, that single die roll comes up the other way, and a seductive but (relatively) sinful philosophy is born. This story strikes me as more likely than that English is such a wondrous language out of all the various languages of the world that it alone confers resistance to philosophical sin. I hadn't thought I needed to make an argument for this; I didn't expect the posters here to take the strong Sapir-Whorf hypothesis seriously. That was my mistake, I see.

As for why you find it more comfortable to use English when talking about technical subjects, I would guess that it is because English has a particularly rich technical vocabulary, due to English-speakers recently leading the way in technical progress. I don't speak Russian so I don't know how it works there, but I know that many languages use loan words from English heavily for technical subjects. One might be able to argue that this is a deficiency of these other languages, but it doesn't seem like the sort of deficiency that would lead one into philosophical error, and it is not a deficiency that existed when the Platonists were writing in any case.

Replies from: cousin_it
comment by cousin_it · 2009-05-17T17:50:31.559Z · LW(p) · GW(p)

I sympathize with your idea of amplification of random differences, but will try to persuade you that "strong Sapir-Whorf", even if false per se, might still contain a grain of merit.

Any language co-evolves with its culture - a sort of definite integral over their historical path up to now. The English language is not wholly determined by its grammar and vocabulary: if you randomly generate many grammatical and meaningful phrases in English, a lot of them will still sound "wrong" because they don't correspond to multi-word frequency patterns of everyday English use. As a culture evolves, those multi-word frequencies change to reflect reality. That's why I find it hard to talk about programming in Russian: I'm missing not just the technical terms, but also the grown connective tissue of commonplace word combinations specialized for programming that would make phrases sound natural and easy.

When you study many languages, like me, you find that every language has its own sweet spots. The experience of reading Pushkin in Russian seems to have no analogies in the whole English language and has resisted all attempts at translation. French postmodernist thought sounds great in French but (as a rule) turns to lousy and phony wordplay in other languages. And you never really get the point of Italian until you try singing in it, suddenly feeling the sounds come more naturally than in your language of birth, whatever it is.

Personally, I find the suggestion that English philosophy is more reasonable than continental... to contain more than a grain of truth. The reason for that may well be a complex interplay between geographical location, anthropology, history, art and feeding it all back into the language; pretty hard to disentangle, but the result is, like some "racist and bigoted" conclusions, quite obvious to see once you start looking.

Replies from: Nominull, Annoyance
comment by Nominull · 2009-05-17T19:09:42.206Z · LW(p) · GW(p)

When you study many languages, like me, you find that every language has its own sweet spots. The experience of reading Pushkin in Russian seems to have no analogies in the whole English language and has resisted all attempts at translation. French postmodernist thought sounds great in French but (as a rule) turns to lousy and phony wordplay in other languages. And you never really get the point of Italian until you try singing in it, suddenly feeling the sounds come more naturally than in your language of birth, whatever it is.

It sounds to me as though this effect can be explained by things being lost in translation, which can happen to any language and is not indicative of a deep fundamental difference between languages. The true test of this theory would be to translate some English poetry into Russian and see if it comes out sounding deeper, or to translate some English postmodernism into French and see if it comes out sounding more authentic. It is actually believable that if you translate an English song into Italian that it will be more aesthetically pleasing - different languages have different patterns of sound, and some patterns of sound fit song better than others. But one rarely sings one's philosophy.

Replies from: cousin_it, Vladimir_Nesov
comment by cousin_it · 2009-05-17T20:06:57.528Z · LW(p) · GW(p)

The true test of this theory would be to translate some English poetry into Russian and see if it comes out sounding deeper, or to translate some English postmodernism into French and see if it comes out sounding more authentic.

It's great that you proposed an experiment. Still, the language might facilitate original expression in a certain form more than it facilitates translation. To control for that effect somewhat you could ask a good Russian poet or French philosopher to do the translation, and I can guess how that will turn out!

Can't say about philosophy, but Russians with good knowledge of English or other languages often prefer good Russian translations of literary works to the originals, despite preferring the originals of technical texts etc. Maybe an artefact of the Soviet era when non-state-sanctioned writers often worked as translators, raising the average quality of translations at the cost of never publishing their own thing. For example, I didn't much enjoy the original text of LOTR compared to the Russian translation by Muravjev/Kistyakovsky. The contrast is especially felt in the poetry: Russian has a much deeper store of rhymes, no one is forced to stick to "lie - die" or "land - hand" as Tolkien had to, and all inconsistencies of meter are also gone - I was actually amazed how many of them there are in the English version, making some verses almost unreadable.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-17T21:56:31.100Z · LW(p) · GW(p)

It's worth noting that some of Tolkien's poems were originally written in Quenya, a language that Tolkien designed specifically for poetry -- he once said that he created the world of Middle-Earth just so that he could have somewhere his language was spoken -- and it's not surprising if Tolkien's English translations from Elvish and his other languages aren't as good.

comment by Vladimir_Nesov · 2009-05-18T22:04:58.108Z · LW(p) · GW(p)

Nominull, your remarks about language here and above seem off (although my experience is narrower than cousin_it's, since I'm only bilingual; German is coming slowly, with little motivation). Each language has its sound, influencing the way you can use it for different tasks. Of course, you can accurately communicate a deeply understood concept in any language, by describing it redundantly, but that doesn't apply to the sum total of everyday use, in particular to viewing the language as a tool for refining your concepts.

comment by Annoyance · 2009-05-17T18:30:24.411Z · LW(p) · GW(p)

I think it's better in all cases simply to say that strong Sapir-Whorf is wrong, but that weaker versions have some utility as a means to understand how humans think and speak.

comment by kim0 · 2009-05-18T20:55:07.157Z · LW(p) · GW(p)

What exactly makes it difficult to use Russian? I know Russian, so I will understand the explanation.

I find my native Norwegian better to express concepts in than English. If I program something especially difficult, or do some difficult math, physics, or logic, I also find Norwegian better.

However, if I do some easier task, where I have studied it in English, I find it easy to write in English, due to a "cut and paste" effect. I just remember stuff, combine it, and write it down.

Replies from: cousin_it
comment by cousin_it · 2009-05-18T22:18:27.179Z · LW(p) · GW(p)

Whenever I try translating some math or programming stuff from Russian into English or vice versa, the Russian version ends up about 20% longer. Maybe it's because many useful connective words in Russian are polysyllabic, e.g. "kotoryi" (which), "chtoby" (to), "poetomu" (so), making sentences with complex logical structure sound clumsy. Translating into Russian always feels like a poetic jigsaw puzzle to make the phrase sound okay, while translating into English feels more anything-goes at the expense of emotional nuance. YMMV.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-18T22:32:12.418Z · LW(p) · GW(p)

It seems that, at least in this usage, English better approximates the ideal expressed in Entropy, and Short Codes:

People have a tendency to talk, and presumably think, at the basic level of categorization - to draw the boundary around "chairs", rather than around the more specific category "recliner", or the more general category "furniture". People are more likely to say "You can sit in that chair" than "You can sit in that recliner" or "You can sit in that furniture".

And it is no coincidence that the word for "chair" contains fewer syllables than either "recliner" or "furniture". Basic-level categories, in general, tend to have short names; and nouns with short names tend to refer to basic-level categories. Not a perfect rule, of course, but a definite tendency. Frequent use goes along with short words; short words go along with frequent use.
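To put a rough number on that tendency: under an ideal (Shannon) code, a word used with probability p costs about -log2(p) bits. A toy sketch, with invented frequencies for illustration only:

```python
import math

# Invented frequencies, purely illustrative: frequent words earn
# short ideal code lengths (~ -log2(p) bits), rare words long ones.
freqs = {"chair": 0.60, "recliner": 0.25, "furniture": 0.15}
for word, p in freqs.items():
    print(f"{word:9s} ~{-math.log2(p):.1f} bits")
```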

comment by thomblake · 2009-05-19T13:47:50.758Z · LW(p) · GW(p)

I've heard that a sizable proportion of German philosophy students read English translations of German philosophers because it's easier to understand.

Also, that Latin was a terrible language for philosophy is oft cited as a reason why philosophy did so much better in Greece than in Rome.

comment by PhilGoetz · 2009-05-17T20:43:28.581Z · LW(p) · GW(p)

Are racist and ignorant synonyms now?