The mathematical inconsistency between quantum mechanics and general relativity illustrates a key point. Most of the time the hypothesis set for new solutions, rather than being infinite, is empty. It is often quite easy to show that every available theory is wrong. Even when we know that our theory is clearly inconsistent with reality, we keep using it until we come up with something better. Even if General Relativity had been contradicted by some experimental discovery in 1963, Einstein would still have been lauded as a scientist for finding a theory that fit more data points than the previous one.
In science, and in a lot of other contexts, simply showing that a theory could be right is much more important than establishing to any degree of statistical significance that it is right.
There are not an infinite number of possible hypotheses in a great many sensible situations. For example, suppose the question is "who murdered Fred?", because we have already learned that he was murdered. The partial answer we already know, "a human alive at the time he died," makes the set finite. If we can determine when and where he died, the number of suspects can typically be reduced to dozens or hundreds. Limiting the set to those capable of carrying out the means of death may cut 90% of them.
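As a rough sketch of how quickly those filters compound (every count below is my own illustrative assumption, apart from the "dozens or hundreds" and "cut 90%" estimates in the paragraph above):

```python
# Hypothetical narrowing of the "who murdered Fred?" hypothesis set;
# all counts are illustrative assumptions, not real data.
alive_at_death = 7_000_000_000                    # "a human alive at the time he died"
near_the_scene = 300                              # "dozens or hundreds" with time and place fixed
capable_of_means = round(near_the_scene * 0.10)   # the capability filter keeps ~10%

print(f"{alive_at_death:,} -> {near_the_scene} -> {capable_of_means}")
# 7,000,000,000 -> 300 -> 30
```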
To the extent that "bits" of evidence means things that we don't know yet, the number of bits needed can be much smaller than suggested. To the extent that "bits" of evidence includes everything we know so far, we all have trillions of bits in our brains already and the minimum number is meaningless.
Einstein didn't come up with General Relativity that way. He didn't even do the hard math himself. He came up with some little truths (e.g., the equivalence principle, the constancy of the speed of light, covariance, the requirement that the theory reduce to Newtonian gravity in unexceptional cases) from a handful of results that didn't seem to fit classical theory, and then he found a set of equations that fit.
Newtonian gravity provided heaps of data points and a handful of non-fits. Einstein bootstrapped on prior achievements like Newtonian gravity and special relativity and tweaked them to fit a handful of additional data points better. His confidence came from fitting 100% of the available data set (something that wasn't clear in the case of the cosmological constant), however small that data set may have been. The minimum bit hypothesis assumes that all bits are created equal. But they aren't. Some bits advance the cause not at all; some bits advance it a great deal.
Similarly, the 27 bit rule for 100,000,000 people assumes that each bit splits the population evenly, with equal numbers of people answering yes and no to a question. In fact, some bits are more discriminating than others. "Have you ever been elected to an office that requires a statewide vote, or been a Vice President?" (perhaps two bits of information) is going to eliminate 99.9999%+ of potential candidates for President, yet works nearly perfectly to dramatically narrow the field from the 100,000,000 eligible candidates. "Do you want to run for President?" cuts another 90%+ of potential candidates.
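To make the arithmetic behind those figures explicit (a minimal sketch; the yes-rates below are my own illustrative assumptions, not data):

```python
import math

eligible = 100_000_000

# 27 bits suffice only if every yes/no question splits the field evenly:
print(math.log2(eligible))       # ~26.58, hence the "27 bit rule"

# A lopsided question carries far more than one bit when the answer is the rare one.
# If, say, one eligible person in a million has won a statewide race or been VP:
p_statewide = 1e-6               # illustrative assumption
print(-math.log2(p_statewide))   # ~19.93 bits of discrimination in a "yes"

# "Do you want to run for President?" cutting 90%+ of the remainder:
print(-math.log2(0.10))          # ~3.32 bits in a "yes"
```

The point survives the exact numbers: a question whose "yes" rate is p delivers -log2(p) bits of discrimination to the people who answer yes, so a handful of lopsided questions can do the work of dozens of evenly splitting ones.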
Einstein was confident because his bits had greater discriminatory power than other bits of information. There are only so many ways it is logically possible to fit the data he had.
One doesn't have to use irrational arguments to push rationality, but one of the lessons we draw from how people make decisions is that people simply do not decide how to view and understand the world, even when deciding to do so rationally, in an entirely rational way. The emotional connection matters as well.
Rational ideas proffered without an emotional counterpart wither. The political landscape is full of people who advanced good, rational programs or policy ideas or views about science that crashed and burned for long periods of time because the audience didn't respond.
Look at the argument of SarahC's original post itself. It isn't a philosophical proof with Boolean logic; it is a testimonial about the emotional benefits of this kind of outlook. This is perfectly valid evidence, even if it is not obtained by a "reasoning process" of deduction. In the same way, I took particular pride when my non-superstitiously raised daughter won the highest good character award in her elementary school, because it showed that rational thinking isn't inconsistent with good moral character.
While one doesn't want to undermine one's own credibility with the approach one uses to make an argument, it is also important to defuse false inferences in arguments that oppose rationality. One of the false inferences is that rational is synonymous with amoral. Another is that rational is synonymous with emotionally vacant and unfulfilling. A third is the sense that rationality implies using individual reason alone, without the benefit of a social network and context, because that is the character of a lot of activities (e.g., math homework or tax return preparation or logic problems) that are commonly characterized as "rational." Simple anecdote can show that these stereotypes aren't always present. Evidence from a variety of sources can show that these stereotypes are usually inapt.
When one looks at the worldview one chooses for oneself, it isn't enough to argue that rationality gives correct answers; one must establish that it gives answers in a way that allows you to feel good about how you are living your life. Without testimonials and other emotional evidence, you don't establish that there are no hidden costs being withheld from your audience.
Moreover, marketing, in the sense I am using the word, is not about "exploiting irrational responses." It is about something much more basic: using words that will convey to the intended audience the message that you actually intend to convey. Care in one's use of words so as to avoid confusion in one's audience is quintessentially consistent with the good practice of someone seeking to apply a rational method in philosophy.
"Reason" and "evidence based" are both quite nice words to convey the idea.
I have, and even started to mention it, but figured that I was going too far afield. I think the problem there is that the established meaning of "Bright" as intelligent overshadows the secondary meaning that is sought. I think "light" as a metaphor is promising, but the word "Bright" in particular is inapt.
FWIW, I am inclined to think that "rationality" is a bad brand identification for a good thing. Rationality conjures up "Spock" (the Star Trek character), not "Spock" (the compassionate and wise child-rearing guru). It puts an emphasis on a very inhuman part of the kind of human being you feel you are becoming.
Whatever it means in your context, as a brand to evangelize to others about its benefits, it is lacking. Better, in the sense of offering a positive vision, perhaps, than "atheism" or "secularism," but still not grounded and humane enough. I like "naturalist" better, although it is loaded with the connotation of bird watching, and also "humanist," although that term, without the modifier "secular," can mean little more than someone who gives a damn. "Enlightened" (as in the Enlightenment era) might be a good term if it weren't so damned arrogant in the modern vernacular.
The sense I think you are trying to capture is something like the sense conveyed by the title of Carl Sagan's book "The Demon-Haunted World." You want to convey the joys of having exorcised the demons and of opening yourself to seeing the world more clearly. But, to sell it to others, I think it is necessary to find a better marketing plan.
In the mental health area, the polar extreme from the pathology model is the "neurodiversity" model. The point about allowing treatment when it is available and effective, whether the treatment is an "enhancement" or a "cure," is also worthwhile.
In the area of obesity, I think we are pretty open, as a society, to letting the evidence guide us. In the area of mental health, we are probably less so, although I do think that empirical evidence about the nature of homosexuality has been decisive in driving a dramatic change in public opinion about it.
A key concept that sums up your analysis, which you call "determinist consequentialist," is the revelation that you should know why you want to use a word before you define it, and that a word may have different definitions that are appropriate for different contexts. A definition of disease designed to draw a line limiting covered medical expenditures may find an enhancement/pathology treatment distinction useful, while a definition of disease designed to determine whether treatment should be available to those for whom ability to pay is not the issue might not.
"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."
Really? When I look at Aquinas or Plato or Aristotle, I see people mostly asking questions that we no longer care about because we have found better ways of dealing with the issues that made those questions worth thinking about.
Scholastic discourse about the Bible or angels makes much less sense when you have a historical-critical context to explain how it emerged in the way that it did, and a canon of contemporaneous secular works to make sense of what was going on in their world at the time.
Philosophical atomism is irrelevant once you've studied modern physics and chemistry.
The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.
Also, not all new perspectives on the world have value. Continental philosophy and post-modernism are to philosophy what mid-20th century art music is to music composition. It is a rabbit hole that a whole generation of academics got sucked into and wasted their time on. It turned out that the future of worthwhile music was elsewhere, in people like Elvis and the Beatles and rappers and Nashville studios and Motown artists and resurrections of the greats of the classical and romantic periods in new contexts, and the tone poems and dissonant music and other academic experiments of that era were just garbage. They lost sight of what music was for, just as the continental philosophers and post-modernist philosophers lost sight of what philosophy was for.
The language is impenetrable because they have nothing to say. I know what it is like to read academic literature, for example in the sciences or economics, that is impenetrable because it is necessarily so, but that isn't it. People who use sophisticated jargon when it is really necessary are also capable of speaking much more clearly about the essence of what is going on - people like Richard Feynman. But our modern day philosophical sophisticates are known to no one but each other and are not adding to the larger understanding. Instead, all of the other disciplines are busy purging themselves of all that dreck so that they can get back on solid ground.
It seems to me that philosophy is most important for refining mere intuitions and bumbling around until we find a rigorous way of posing the questions that are associated with those intuitions. Once you have a well posed question, any old scientist can answer it.
But philosophy is necessary to turn the undifferentiated mass of unprocessed data and potential ideas into something that is susceptible to being examined.
Rationality is all fine and good, but reason applies known facts and axioms with accepted logical relationships to reach conclusions.
The importance of hypothesis generation is much underappreciated by scientists, but critical to the enterprise, and to generate a hypothesis, one needs intuition as much as reason.
Genius, meanwhile, comes from being able to intuitively generate a hypothesis that nobody else would, breaking the mold of others' intuitions, and building new conceptual structures from which to generate novel intuitive hypotheses and eventually to formulate the conceptual structure well enough that it can be turned over to the rationalists.
"3. Philosophy has grown into an abnormally backward-looking discipline."
Indeed. One of the salutary roles that philosophy served until about the 18th century (think, e.g., "natural philosophy") was to serve as an intellectual context within which new disciplines could emerge and new problems could be formulated into coherent complexes of issues that became their own academic disciplines.
In a world where cosmology and quantum physics and neuroscience and statistics and scientific research methods and psychology and "law and whatever" are vibrant, we don't need philosophers to deal with metaphysics and epistemology, but we may need considerably more philosophical attention to questions like "what about a book has value?", or "what obligations do people have to each other in an unequal society?", or "what does it mean to be human?"
One of philosophy's main cutting-edge agendas should be formulating new questions to ask and serving as an incubator in which to outline the boundaries of the new disciplines of specialists that will answer those questions.
Any summary of the discipline that looks like an index of the last two thousand years of philosophical thought is probably missing the stuff that philosophers should be spending their time considering.
Alternately, one approach that many academic philosophers seem to be fond of taking is to consider themselves primarily intellectual historians, with a particularly rich and subtle tradition to master so that it can be understood by those who are primarily interested in the history of ideas. In the same way, Freud is a bad place to look for someone interested in doing clinical psychology, but a good place to look for someone interested in understanding the conceptual roots of many of the ideas that shaped lay and professional understanding of the individual mind.