Scientist vs. philosopher on conceptual analysis
post by lukeprog · 2011-09-20T15:10:30.768Z · LW · GW · Legacy · 24 comments
In Less Wrong Rationality and Mainstream Philosophy, Conceptual Analysis and Moral Theory, and Pluralistic Moral Reductionism, I suggested that traditional philosophical conceptual analysis often fails to be valuable. Neuroscientist V.S. Ramachandran recently made some of the same points in a polite sparring match with philosopher Colin McGinn over Ramachandran's new book The Tell-Tale Brain:
Early in any scientific enterprise, it is best to forge ahead and not get bogged down by semantic distinctions. But “forging ahead” is a concept alien to philosophers, even those as distinguished as McGinn. To a philosopher who demanded that he define consciousness before studying it scientifically, Francis Crick once responded, “My dear chap, there was never a time in the early years of molecular biology when we sat around the table with a bunch of philosophers saying ‘let us define life first.’ We just went out there and found out what it was: a double helix.” In the sciences, definitions often follow, rather than precede, conceptual advances.
24 comments
Comments sorted by top scores.
comment by thomblake · 2011-09-20T16:16:37.155Z · LW(p) · GW(p)
Francis Crick once responded, “My dear chap, there was never a time in the early years of molecular biology when we sat around the table with a bunch of philosophers saying ‘let us define life first.’ We just went out there and found out what it was: a double helix.” In the sciences, definitions often follow, rather than precede, conceptual advances.
This is a great example of a situation where this behavior led to trouble! "Gene" had previously referred to heritable traits and was only afterward identified with bits of DNA, which led many people to think DNA was the only source of heritability.
↑ comment by JenniferRM · 2011-09-20T22:03:18.684Z · LW(p) · GW(p)
Moreover, Crick's claim is flatly contradicted by history. Erwin Schrödinger wrote "What Is Life?" in 1944 and predicted that biological information would be stored in aperiodic molecules controlling the unfolding metabolic processes of life. That book was part of the reason there was widespread enthusiasm for the search for these hypothesized molecules, and part of the reason Crick became famous after stealing the status and associated career-supporting resources for the discovery from Rosalind Franklin: he looked at her notebooks, which contained actual data from actual work, and published first.
The lesson I've learned from Crick's history is that unethical self-congratulatory blowhards can succeed at the social games of academic science, just as much as they can succeed in other social games, so long as they have victims to steal from. As someone interested in "the effecting of all things possible" who wishes both solid and productive thinking and solid and productive experimental work to be rewarded so that incentives for individual scientists encourage real progress, I consider Crick more of a popularizing semi-parasite than either a "thinker" or a "doer". (Relatedly, see: Stigler's Law.)
↑ comment by CronoDAS · 2011-09-21T21:36:46.452Z · LW(p) · GW(p)
"Yes, but when I discovered it, it stayed discovered." - Lawrence Shepp
↑ comment by JenniferRM · 2011-09-23T18:30:56.797Z · LW(p) · GW(p)
Indeed, there's a reason for the line "publish or perish". Popularization is important, not just for the publisher but for the world. But in addition to telling people what they have discovered, a scientist can also explain how it was discovered. Credit assignment, within a mind or between minds, is a hard problem, and solving it well usually brings increased performance.
My objection to Crick isn't that he failed to get the message out about DNA's double-helix structure (he did that quite successfully), nor that he failed to illustrate a method for advancing one's scientific career, but that with this quote his own report of his supposed contribution and methods gives misleading evidence about what actually makes a research program go faster or better. The credibility flows from his fame and his presumptive causal role in a revolution in biology. In contrast, the best content I know on the subject of learning how to do good research, representing the condensation of enormous volumes of evidence, is an old chestnut, Richard Hamming's talk "You and Your Research":
At Los Alamos I was brought in to run the computing machines which other people had got going, so those scientists and physicists could get back to business. I saw I was a stooge. I saw that although physically I was the same, they were different. And to put the thing bluntly, I was envious. I wanted to know why they were so different from me. I saw Feynman up close. I saw Fermi and Teller. I saw Oppenheimer. I saw Hans Bethe: he was my boss. I saw quite a few very capable people. I became very interested in the difference between those who do and those who might have done.
When I came to Bell Labs, I came into a very productive department. Bode was the department head at the time; Shannon was there, and there were other people. I continued examining the questions, "Why?" and "What is the difference?" I continued subsequently by reading biographies, autobiographies, asking people questions such as: "How did you come to do this?" I tried to find out what are the differences. And that's what this talk is about.
In that talk, Hamming spends some words on the question of conceptual analysis:
Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it. Darwin writes in his autobiography that he found it necessary to write down every piece of evidence which appeared to contradict his beliefs because otherwise they would disappear from his mind. When you find apparent flaws you've got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions.
comment by scientism · 2011-09-20T16:57:33.891Z · LW(p) · GW(p)
I think this is quite crude. Obviously scientists don't have to wait for philosophers to get the right definition before they can proceed but scientists often end up barking up the wrong trees because of bad philosophy. It's like Keynes said about economists, "Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist." The same is true of scientists who claim to have no interest in philosophy. (This is especially true in neuroscience where early practitioners had a very keen interest in philosophy and it shaped the whole field.)
There has always been interaction between scientists and philosophers. The current hostility to philosophy has more to do with trends in philosophy (naturalism) than trends in science.
↑ comment by Luke_A_Somers · 2011-11-02T15:40:11.192Z · LW(p) · GW(p)
scientists often end up barking up the wrong trees because of bad philosophy
Examples, please?
↑ comment by [deleted] · 2011-09-20T17:00:55.199Z · LW(p) · GW(p)
.
comment by Shmi (shminux) · 2011-09-20T17:24:27.318Z · LW(p) · GW(p)
The situation reminds me of the relationship between math and physics. When the necessary mathematical tools are missing, physicists invent crude new ones to "just get it done" rather than wait for the math to catch up. Then the mathematicians swoop in to refine, polish, beautify, and admire them, and scoff at the ugly earlier implements.
Some notable examples that come to mind are calculus, conservation laws, Dirac's delta and bra-ket notation in quantum mechanics, path integrals, renormalization.
Lest some overzealous reader misunderstand my point: I do not intend to badmouth math and mathematicians. The elegant new tools often lead to better understanding of the underlying physical phenomena and consequently to new discoveries in physics. The same can rarely be said of philosophy.
↑ comment by komponisto · 2011-09-20T22:00:19.329Z · LW(p) · GW(p)
Some notable examples that come to mind are calculus, conservation laws, Dirac's delta and bra-ket notation in quantum mechanics, path integrals, renormalization.
Calculus is not a fair example, because the disciplines of mathematics and physics were not separated at the time; the work of Newton and Leibniz was the best "mathematics" (as well as "physics") of the era. Noether's theorem is not an example at all but a counterexample: a mathematical theorem, proved by a mathematician, that provided new insights into physics.
Dirac's delta and Feynman path integrals are fair examples of your point.
↑ comment by Shmi (shminux) · 2011-09-21T03:46:41.225Z · LW(p) · GW(p)
I concede the calculus point on a technicality :). Certainly there was no clear math/physics separation at the time (and the term physics as currently understood didn't even exist), but the drive to develop the math necessary to solve real-life problems was certainly there, and separate from the drive to do pure mathematics. And it took a long time before the d/dx notation was properly formalized.
As for Noether's theorem, it was inspired by Einstein proving energy-momentum tensor conservation in General Relativity, without himself realizing that it was a special case of a very general principle.
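For reference, the general principle in question can be stated compactly in its standard Lagrangian form (a textbook summary, sketched here for context; the notation is the usual one and not taken from the thread): a continuous symmetry of the Lagrangian implies a conserved quantity.

\[
\delta L = 0 \ \text{under}\ q \mapsto q + \epsilon\,\delta q
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\,\delta q\right) = 0
\ \text{on solutions of the Euler--Lagrange equations.}
\]

In field theory, invariance under spacetime translations yields the conserved energy-momentum tensor, \(\partial_\mu T^{\mu\nu} = 0\); its curved-spacetime analogue, \(\nabla_\mu T^{\mu\nu} = 0\), is the conservation law Einstein established in General Relativity.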
Replies from: komponisto↑ comment by komponisto · 2011-09-21T15:20:32.690Z · LW(p) · GW(p)
Certainly there was no clear math/physics separation at the time (and the term physics as currently understood didn't even exist), but the drive to develop the math necessary to solve real-life problems was certainly there, and separate from the drive to do pure mathematics
Except that the problems in question (explaining the motion of the planets and so on) would not have been considered "real-life problems" back then; rather, they would have been considered "abstract philosophical speculations" that would have carried the same kind of stigma among "practical men" that "pure mathematics" does today.
And it took a long time before the d/dx notation was properly formalized.
I think you mean "justified"; if there was one thing Leibniz was good at, it was formalizing!
As for Noether's theorem, it was inspired by Einstein proving energy-momentum tensor conservation in General Relativity, without himself realizing that it was a special case of a very general principle.
According to your Wikipedia link, it represented the solution to a physical problem in its own right: a paradox wherein conservation appeared to be violated in GR.
comment by Jack · 2011-09-21T15:34:30.384Z · LW(p) · GW(p)
This whole discussion seems needlessly beholden to an unjustified distinction between philosophy and science: that the difference between the two is a matter of kind not of degree. If we call the entire enterprise of mapping the territory 'cartography' then there is a spectrum from applied cartography- which involves mapping particular regions with known and justified methods (stamp collecting at the limit)- to theoretical cartography in which controversial methods are debated, the aim and nature of the enterprise clarified, meta-issues discussed and theories of utmost generality advanced. There are sociological facts about the scientific enterprise that result in the spectrum being divided in arbitrary ways-- certain mapping techniques are only done in certain university buildings, for example. One of these divisions involves putting a large fraction of our most theoretical cartographers in the same building.
The result is your typical Less Wrong poster laughing "Oh you silly philosophers, proven wrong time and time again by experiment. What a worthless enterprise. Why don't you just go out and see what is there?" But condemning philosophers in this way is, I think, a bit like condemning theoretical physicists for not being experimental physicists.
The difference that is usually given is that unlike theoretical physicists, philosophers often don't make falsifiable predictions. But a) at the limit the falsifiability of a theoretical scientist's prediction is a nebulous affair (see string theory, for example) and b) philosophers rarely make claims regarding the outcome of single experiments-- the analog to falsification for a philosophy is something like 'conceptual irrelevance'. A philosophy is 'falsified' when the going scientific paradigm ceases to cohere with the philosophy.
This is not to say the condition of contemporary analytic philosophy departments is a good one. But I think it is historically analogous to something like Tychonian astronomy-- on the precipice of a paradigm shift. In particular, the issue is that departments are still in large part filled with pre-Turing thinkers; people who have not internalized the deep changes the cybernetic revolution has made to the scientific enterprise. When you see philosophers who do understand computers, you routinely see very good work. And it certainly seems to me that that fraction of philosophers is growing. It is no surprise that some of the great recent philosophical insights have come not from philosophy departments but from CS departments. But this community seems to (or at least seems prepared to) arbitrarily accord respect to some thinkers over others merely because of what building they work in. See, for example, how everyone here knows who Judea Pearl is but no one knows who James Woodward is. This overly negative, almost reactionary attitude toward 'things labeled philosophy instead of cognitive science' does not seem likely to speed up the paradigm shift. A more proactive attitude that encouraged promising currents within philosophy departments would be more productive.
↑ comment by lessdazed · 2011-09-22T08:58:15.566Z · LW(p) · GW(p)
But condemning philosophers in this way is, I think, a bit like condemning theoretical physicists for not being experimental physicists.
The difference that is usually given is that unlike theoretical physicists, philosophers often don't make falsifiable predictions. But a) at the limit the falsifiability of a theoretical scientist's prediction is a nebulous affair (see string theory, for example) and b) philosophers rarely make claims regarding the outcome of single experiments-- the analog to falsification for a philosophy is something like 'conceptual irrelevance'. A philosophy is 'falsified' when the going scientific paradigm ceases to cohere with the philosophy.
I don't ask for falsifiable experiments; I ask for reasonably narrow group acknowledgement of which things are more likely than others.
Just as physics has a limited set of acceptable beliefs at each scale of matter, and old ideas have been dropped, I'd hope to see philosophy in the same state. I too don't see a difference in kind, but philosophy seems to have great difficulty as a group saying oops and shedding bad ideas.
This overly negative, almost reactionary attitude toward 'things labeled philosophy instead of cognitive science' does not seem likely to speed up the paradigm shift. A more proactive attitude that encouraged promising currents within philosophy departments would be more productive.
This is a criticism of expressions, not thoughts or even attitudes.
comment by JamesCole · 2011-09-27T08:02:13.971Z · LW(p) · GW(p)
I think there is a problem in the culture of philosophy.
Defining things up front is seen as generally better, because it seems more precise.
That sounds reasonable. Who doesn't want greater precision?
Precision is good when it is possible. But often we don't have a good enough understanding of the phenomena to be precise, and the "precision" that is given is a faux-precision.
Often, philosophers use logic to define precise categories by examining their concept of X. Then they look at, discuss, and argue over the consequences of these definitions.
I think it'd be more appropriate for them to spend more time trying to examine the nature of the instances of X out there (as opposed to the properties of their concept of X), based on a loose notion of 'X' (because at this point they don't really know what X is).
(caveat: I didn't read the pages linked to in this post's description)
comment by lukeprog · 2013-01-13T03:02:33.602Z · LW(p) · GW(p)
Or, Keith Stanovich, in How to Think Straight About Psychology (9e):
A common indication of the essentialist attitude is an obsessive concern about defining the meaning of terms and concepts before the search for knowledge about them begins... In fact, this is exactly the opposite of the way scientists work. Before they begin to investigate the physical world, physicists do not engage in debates about how to use the word energy or whether the word particle really captures the essence of what we mean when we talk about the fundamental constituents of matter.
The meaning of a concept in science is determined after extensive investigation of the phenomena the term relates to, not before such an investigation. The refinement of conceptual terms comes from the interplay of data and theory that is inherent in the scientific process, not from debates on language usage.
comment by Plasmon · 2011-09-21T07:20:22.162Z · LW(p) · GW(p)
This “forging ahead” behaviour leads to a bunch of concepts that have a nice accepted definition among experts which differs from the common usage among laymen. Consider "energy", "chaos", "evolution", "complexity", .... Ignoring, forgetting, or not being aware of the distinction between the common usage and the proper definition is a common trap for laymen and a common tactic for pseudo-scientists with an agenda. In my opinion, scientists should invent new words instead of re-defining existing words (quantum physicists are particularly good at this: "quark" was basically a made-up word, and no one will confuse "flavour" and "strangeness" with their common macroscopic usages).
True, scientists may one day find a proper definition for "consciousness", after having studied it, but I do not expect that it will match that which is today called "consciousness" by laymen.
comment by thomblake · 2011-09-21T14:17:51.977Z · LW(p) · GW(p)
Anecdotally, 90% of the time spent on software development is making up good names for things. It seems that even if all philosophers do is make up names for concepts that will be later needed in science, then they're doing important work.
↑ comment by Teal_Thanatos · 2011-09-23T00:32:18.455Z · LW(p) · GW(p)
Hi, I'm a support software developer. Can I ask where exactly that came from?
↑ comment by Richard_Kennaway · 2011-09-23T13:41:22.974Z · LW(p) · GW(p)
Possibly related is this:
There are only two hard things in Computer Science: cache invalidation and naming things.
Tim Bray, quoting Phil Karlton. This is as close to a definitive source as there is.
↑ comment by thomblake · 2011-09-23T12:50:33.748Z · LW(p) · GW(p)
Urban legend, I think. 87% of all statistics are made up.
↑ comment by Luke_A_Somers · 2011-11-02T15:43:51.495Z · LW(p) · GW(p)
More than 99% of all statistics are made up... I know - I programmed a 'make up statistics' function which produced a randomly generated statistic every few microseconds, and ran it for a year.
(not really)