Should I take an academic class on rationality?

post by aarongertler · 2014-04-27T21:54:15.336Z · LW · GW · Legacy · 27 comments

This would count toward my major, and if I weren't going to take it, the likely replacement would be a course in experimental/"folk" philosophy. But I'd also like to hear your thoughts on the virtues of academic rationality courses in general.

(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)

Here is the syllabus: http://www.yale.edu/darwall/PHIL+333+Syllabus.pdf

Other information: I sampled one lecture of the course last year. It was a noncommittal discussion of Newcomb's problem, which I found somewhat interesting despite having read most of the LW material on the subject.

When I asked what Omega would do if we activated a random number generator that made us one-box with 50.01% probability, the professors didn't dismiss the question as irrelevant, but they also didn't offer any particular answer.
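
To make the payoff structure concrete, here is a toy expected-value sketch (my own model, not anything the professors endorsed), under one assumed Omega policy: fill the opaque box iff the predicted probability of one-boxing exceeds 0.5.

    # Toy expected-payoff model for a mixed strategy in Newcomb's problem.
    # Assumption (mine, for illustration): Omega fills the opaque box with
    # $1,000,000 iff your predicted probability of one-boxing exceeds 0.5.

    def expected_payoff(p_one_box: float) -> float:
        opaque = 1_000_000 if p_one_box > 0.5 else 0   # assumed Omega policy
        one_box = opaque                               # take only the opaque box
        two_box = opaque + 1_000                       # take both boxes
        return p_one_box * one_box + (1 - p_one_box) * two_box

    for p in (0.0, 0.4999, 0.5001, 1.0):
        print(f"P(one-box) = {p}: ${expected_payoff(p):,.2f}")

Under this naive threshold policy, the 50.01% randomizer nets about $1,000,500 in expectation, slightly beating pure one-boxing, which is exactly why how Omega treats randomizers matters; a stricter policy (say, leaving the box empty for anyone who randomizes) reverses the conclusion.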

I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.

Related question: Could someone quickly sum up what might be meant by the "feminist critique" of rationality, as it would be discussed in the course? I've read a few abstracts, but I'm still not sure I know the most important points of these critiques.

27 comments

comment by IlyaShpitser · 2014-04-28T11:22:05.409Z · LW(p) · GW(p)

(Not an expert on academic feminism):

My understanding is that just as LW worries about "corrupted hardware", feminists worry about "corrupted social order." That is, if there are various systematic injustices and power disparities in the social order, and moreover these disparities are difficult for beneficiaries to see, then any product of such a social order, especially one that claims to be impartial, has to be viewed very skeptically indeed, because it likely contains biases inherent in the social order.

Replies from: Vaniver, bogus
comment by Vaniver · 2014-04-28T23:27:11.871Z · LW(p) · GW(p)

I don't think I'm in a position where I could give a statement of the feminist critique that a proponent of it would be happy to call their position, but my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another. That is, the social significance of a statement or concept is more important than whether or not it is concordant with reality.

Replies from: kalium, Richard_Kennaway, bogus, Lumifer
comment by kalium · 2014-04-29T06:29:41.389Z · LW(p) · GW(p)

Subjective perceptions and the relations between humans are also part of reality.

A more charitable phrasing: you view feminism as more concerned with instrumental rationality than with epistemic rationality.

Replies from: Vaniver
comment by Vaniver · 2014-04-29T18:26:15.110Z · LW(p) · GW(p)

Subjective perceptions and the relations between humans are also part of reality.

Of course.

A more charitable phrasing: you view feminism as more concerned with instrumental rationality than with epistemic rationality.

I don't think this is correct, though. My experience has been that in discussions with feminists who critique rationality (FWCR for short),* we have deep disagreements not on the importance of epistemology, but on its process and goal. If something is correct but hurtful, for example, I might call it true because it is correct, while a FWCR would call it false because it is hurtful. (One can find ample examples of this in the arguments for egalitarianism in measurement of socially relevant variables.)

One could argue that they're making the instrumentally rational decision to spread a lie in order to accomplish some goal, or that it's instrumentally rational to engage in coalition politics which involves truth-bending, but this isn't a patrissimo saying "you guys should go out and accomplish things," but a "truth wasn't important anyway."

*I am trying to avoid painting feminism with a broad brush, as not all feminists critique rationality, and it is the anti-rationals in particular on whom I want to focus.

Replies from: kalium
comment by kalium · 2014-04-29T20:36:06.309Z · LW(p) · GW(p)

I've never seen this sort of claim, and thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result. Can you link to an example of the sort of argument you are discussing?

Replies from: Vaniver
comment by Vaniver · 2014-04-29T21:28:32.773Z · LW(p) · GW(p)

thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result.

I did have this sort of thing in mind. My claim was that I think it also goes deeper. This article (PM me your email address if you don't have access to the PDF) splits the criticism into three primary schools, the first of which begins with the content of scientific theories (i.e. racism, sexism, class bias) and from that concludes that rationality is wrong. An excerpt:

If logic, rationality and objectivity produce such theories, then logic, rationality and objectivity must be at fault and women must search for alternative ways of knowing nature. Such arguments often end up privileging subjectivity, intuition, or a feminine way of knowing characterized by interaction with or identification with, rather than distance from, the object of knowledge.

If I'm reading that paragraph right, that's attributed to Luce Irigaray's 1987 paper.

The second school criticizes the methodology and philosophy of science, and then the third criticizes the funding sources (and the implied methodology) of modern science. The author argues that each has serious weaknesses, and that we need to build a better science to incorporate the critiques (with a handful of practical suggestions along those lines) but that the fundamental project of science as a communal endeavor is sound. Since I think the author of that paper is close to my camp, it may be prudent to follow her references and ensure her interpretation of them is fair.

comment by Richard_Kennaway · 2014-04-29T12:16:22.281Z · LW(p) · GW(p)

my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

The subjectivity of our perceptions and how we relate to one another are themselves parts of objective reality.

To steelman the position you're attributing, if philosophy and rationality have been paying too little attention to those parts of objective reality, then they need to focus on those as well as, not instead of, the rest of reality. Or to put that in terms of a concrete example alluded to elsewhere in the thread, nuclear power plants must be designed to be safely operable by real fallible humans.

But they do attend to these things already. Bayesian methods provide objective reasoning about subjective belief. Psychology, not all of which is bunk, deals with (among other things) how we relate to one another. Engineering already deals with human factors.
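
A minimal sketch of that first point, with purely illustrative numbers: the prior is a subjective degree of belief, but the update rule is the same objective procedure for everyone.

    # Bayes' rule: an objective procedure operating on subjective belief.
    # All numbers here are illustrative, not drawn from any real example.

    prior = 0.3            # subjective prior P(H)
    p_e_given_h = 0.8      # likelihood P(E | H)
    p_e_given_not_h = 0.2  # likelihood P(E | not H)

    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    posterior = prior * p_e_given_h / p_e
    print(f"P(H | E) = {posterior:.3f}")  # 0.632

Two agents who share the prior and see the same evidence must end up with the same posterior; the subjectivity lives in the inputs, not in the rule.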

comment by bogus · 2014-04-29T00:30:49.099Z · LW(p) · GW(p)

my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

I'd go even further than that, and state that the very notion of an objective reality onto which we can project our "rational" action without regard for social or moral/ethical factors is somewhat peculiar. It seems to be very much a product of the overall notion of λόγος - variously given the meanings of "argument", "opinion", "reason", "number", "rationality" and even "God" (as in the general idea of a "God's Eye View") - which permeates Western culture.

Needless to say, such "logocentrism" is nowadays viewed quite critically and even ridiculed by postmodernists and feminists, as well as by others who point out that non-Western philosophies often held quite different points of view, even within supposedly "rational" schools of thought. For instance, the Chinese Confucianists and Mohists advocated a "Rectification [i.e. proper use] of Names" as the proper foundation of all rational inquiry, which many in the Western tradition would find quite hard to understand (with some well-deserved exceptions, of course).

Replies from: ChristianKl, gjm, Vaniver
comment by ChristianKl · 2014-04-29T14:45:06.060Z · LW(p) · GW(p)

I don't see why this post is downvoted. When someone asks for an expression of postmodern thought and someone writes a reply to explain it, you shouldn't vote it down because you don't like postmodernism.

comment by gjm · 2014-04-29T17:00:44.299Z · LW(p) · GW(p)

The idea that clarity about language is important is very familiar indeed in the Western philosophical tradition. ("It all depends what you mean by ..." is pretty much a paradigmatic, or even caricatural, philosopher's utterance.) It sounds as if the Confucian notion has a rather different spin on it -- focusing on terminology related to social relationships, with the idea that fixing the terminology will lead to fixing the relationships -- and a bunch of related assumptions not highly favoured among Western analytic philosophers -- but I can't help thinking there's maybe a core of shared ideas there.

It is very possible that I'm overoptimistically reading too much into the terminology, though. Would any Confucian experts like to comment?

comment by Vaniver · 2014-04-29T00:40:52.818Z · LW(p) · GW(p)

The Chinese Confucianists and Mohists, for instance, advocated a "Rectification [i.e. proper use] of Names" as the proper foundation of all rational inquiry

My understanding of this is that it's basically map/territory convergence, with an especial emphasis on social reality: let "the ruler" be the ruler!

comment by Lumifer · 2014-04-29T01:05:04.944Z · LW(p) · GW(p)

overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

I hope these people are kept far, far away from nuclear plants. And regular factories. And machinery. Actually, far away from any sharp objects would be best...

Replies from: bogus
comment by bogus · 2014-04-29T01:19:54.100Z · LW(p) · GW(p)

Some might hope that people who do not allow such a concern to be tempered by other concerns, perhaps of a social and moral/ethical nature, should be kept as far away as possible from any of these objects.

After all, even J. R. Oppenheimer discarded his scientific detachment upon witnessing the first nuclear explosion; instead, he uttered the famous line: "Now I am become Death, the destroyer of worlds." (By contrast, a more "rational" person might simply rejoice that his complex calculations, predicting that the Earth's atmosphere would not be burned up in the explosion, had been proven correct by experimentation!) And Einstein famously regretted his career as a physicist upon learning of these fateful possibilities, stating that if he had known earlier, he would have chosen to be a watchmaker.

Replies from: lfghjkl, Lumifer
comment by lfghjkl · 2014-04-29T07:31:30.580Z · LW(p) · GW(p)

And Einstein famously regretted his career as a physicist upon learning of these fateful possibilities, stating that if he had known earlier, he would have chosen to be a watchmaker.

This is a common misattribution:

http://en.wikiquote.org/wiki/Albert_Einstein#Misattributed

Scroll down to "If only I had known, I should have become a watch-maker."

comment by Lumifer · 2014-04-29T01:44:23.248Z · LW(p) · GW(p)

Some might hope that people who do not allow such a concern to be tempered by other concerns, perhaps of a social and moral/ethical nature, should be kept as far away as possible from any of these objects.

Concern for what's real and what's not should NOT be "tempered by other concerns". I think you're confusing the descriptive with the normative, a.k.a. what is with what should be.

Besides, while you may turn away from learning, say, what happens when you get a certain amount of U-235 packed together, other people won't. And if at some point later they decide to come and take what used to be yours, well...

Replies from: bogus
comment by bogus · 2014-04-29T08:33:50.047Z · LW(p) · GW(p)

I think you're confusing the descriptive with the normative, a.k.a. what is with what should be.

These notions are intertwined, rather. "Normative" concerns guide the "descriptive" inquiries we choose to undertake, and provide criteria for what counts as a "successful" inquiry or experiment. Hume stated that reason should be a slave to the passions; by contrast, medieval philosophers viewed "rational" inquiry as a slave to theology, with its cosmology (in the anthropological sense, i.e. what is our "basic, foundational picture", the way we talk about reality?) and morality.

When we forget about these things, we end up with billions being spent in incredibly complicated experiments on supposedly 'foundational' particle physics at the LHC - raising existential risks, such as the possibility of creating a black hole, or a 'strangelet'. Meanwhile we don't see anything near the same concern about, say, the animals that are nearest to us in the Hominidae group, many of which are significantly endangered in the wild, despite the obvious potential of knowing so much more about "what it means to be human" by keeping them around and studying them more closely. These are not the trivial concerns that the supposed primacy of the 'descriptive' implies them to be; to treat them as such is quite dangerous.

Replies from: lfghjkl, Lumifer
comment by lfghjkl · 2014-04-29T10:54:09.256Z · LW(p) · GW(p)

When we forget about these things, we end up with billions being spent in incredibly complicated experiments on supposedly 'foundational' particle physics at the LHC - raising existential risks, such as the possibility of creating a black hole, or a 'strangelet'.

This is a common misconception. From Safety of high-energy particle collision experiments on Wikipedia:

Claims escalated as commissioning of the LHC drew closer, around 2008–2010. The claimed dangers included the production of stable micro black holes and the creation of hypothetical particles called strangelets,[1] and these questions were explored in the media, on the Internet and at times through the courts.

To address these concerns in the context of the LHC, CERN mandated a group of independent scientists to review these scenarios. In a report issued in 2003, they concluded that, like current particle experiments such as the Relativistic Heavy Ion Collider (RHIC), the LHC particle collisions pose no conceivable threat.[2] A second review of the evidence commissioned by CERN was released in 2008. The report, prepared by a group of physicists affiliated to CERN but not involved in the LHC experiments, reaffirmed the safety of the LHC collisions in light of further research conducted since the 2003 assessment.[3][4] It was reviewed and endorsed by a CERN committee of 20 external scientists and by the Executive Committee of the Division of Particles & Fields of the American Physical Society,[5][6] and was later published in the peer-reviewed Journal of Physics G by the UK Institute of Physics, which also endorsed its conclusions.[3][7]

The report ruled out any doomsday scenario at the LHC, noting that the physical conditions and collision events which exist in the LHC, RHIC and other experiments occur naturally and routinely in the universe without hazardous consequences,[3] including ultra-high-energy cosmic rays observed to impact Earth with energies far higher than those in any man-made collider.

comment by Lumifer · 2014-04-29T14:49:25.645Z · LW(p) · GW(p)

"Normative" concerns guide the "descriptive" inquiries we choose to undertake, and provide a criteria for what counts as a "successful" inquiry or experiment.

Normative concerns guide which inquiries we choose to undertake but they do not (or should not) affect the outcome of these inquiries.

Notably, the normative concerns do NOT provide criteria for success. The cases where this has been attempted -- e.g. Lysenko and genetics in Soviet Russia -- are universally recognized as failures. Richard Feynman had a lot to say about this.

These are not trivial concerns

By which criteria do you divide concerns into "trivial" and not?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-30T00:55:07.862Z · LW(p) · GW(p)

Normative concerns guide which inquiries we choose to undertake but they do not (or should not) affect the outcome of these inquiries.

But they also guide what counts as success. If your biology research is aimed at developing new bioweapons, then stumbling upon a cure for cancer does not count as a success.

comment by bogus · 2014-04-28T11:48:42.864Z · LW(p) · GW(p)

Yes, of course. And this is especially concerning because 'rationality', 'winning' and the like are quite clearly not ideologically neutral concepts. They are very much the product of a dominator culture, as opposed to one more focused on, say, care and nurturing - be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.

LessWrong folks like to talk about their pursuit of a "Friendly AI" as a possible escape from this dilemma. But it's not clear at all just how 'friendly' an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology. As a general rule of thumb, our developments in so-called "rationality" have not been kind to such groups.

Replies from: MathiasZaman, ThisSpaceAvailable
comment by MathiasZaman · 2014-04-28T14:44:17.524Z · LW(p) · GW(p)

They are very much the product of a dominator culture, as opposed to one more focused on, say, care and nurturing - be it of fellow human beings or our natural environment

For someone with a strong interest in or preference toward caring and nurturing, rationality is still very useful. It helps you learn how best to care for as many people as possible or to nurture as many pandas (or whatever). Caring and nurturing still have win-states; they're just cooperative instead of competitive.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-29T14:46:39.221Z · LW(p) · GW(p)

It helps you learn how best to care for as many people as possible or to nurture as many pandas (or whatever).

What evidence do you have for that claim? Would that pass objective tests for good evidence?

comment by ThisSpaceAvailable · 2014-05-08T02:52:02.923Z · LW(p) · GW(p)

They are very much the product of a dominator culture, as opposed to one more focused on, say, care and nurturing - be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.

"Winning" means maximizing your utility function. If you think that "care and nurturing" are important, and yet you failed to include them in your utility function, the fault lies with you, not rationality. Complaining about rationality not taking into account care and nurturing is like complaining about your car not taking into account red lights.

LessWrong folks like to talk about their pursuit of a "Friendly AI" as a possible escape from this dilemma.

What dilemma?

But it's not clear at all just how 'friendly' an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology.

An AI friendly to Western values would be a tool through which Western civilization could enforce its values. If you don't like Western values, then your objection is to Western values, not to the tool used to facilitate them.

As a general rule of thumb, our developments in so-called "rationality" have not been kind to such groups.

I don't find that to be clear. The mistreatment of non-Western people can arguably be attributed to anti-rational positions, and by most measures, most people are better off today than the average person was a thousand years ago.

comment by Vaniver · 2014-04-28T23:16:17.168Z · LW(p) · GW(p)

Generally, I focus on these four reasons to take classes:

  1. It is required for a degree you want.
  2. You want to interact with the professor.
  3. You want to interact with the other students.
  4. You want to have external pressure to complete some tasks.

Some people take classes because they want to learn the subject the class is on, but unless that unpacks into the latter three reasons, there's probably a better way to accomplish it.

As mentioned by others, it looks to me like this class does well on all of those counts (though I'm going off your one-lecture impression of the professors). This is probably the best place at Yale to meet interested students for your rationality meetup, and the professors are probably good network hubs for this.

As for feminist critiques of rationality, the syllabus lists the readings right there! This is week 1, and this is week 2. (The first one has limited pages in the preview, and I doubt you'll be able to read all 51 pages of the second chapter, but you should be able to find it in the library.)

comment by [deleted] · 2014-04-28T14:41:32.847Z · LW(p) · GW(p)

(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)

So the course would be an easy boost to your GPA? What's the argument against going, then?

Presumably you've already paid for this course; presumably the alternative's expected value isn't high, given that it's something you feel the need to put in scare quotes; and presumably you'd do well in the class, which would boost valuable metrics.

Given that, I would default to taking the course that's a known possibly-interesting probable-benefit, and only switch if there was a very good argument to take something else.

comment by [deleted] · 2014-04-28T00:22:10.127Z · LW(p) · GW(p)

More interesting students = more chances that your claims will be challenged, and where you are mistaken you will have the chance to become less wrong. This is the most valuable thing college offers while you're in it (the diploma, thus the job, thus the pay, is the best thing that comes after college). The chance that you will meet good challenges you cannot predict and do not control is higher in a class than with your own flyers for your own group; the latter will be self-selecting for agreement from the start.

comment by ChristianKl · 2014-04-28T00:42:35.169Z · LW(p) · GW(p)

I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.

Speaking to people in person makes it easier to recruit them to come to your meetup. Having a good relationship with the professor who teaches the course could also come in handy.